00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 972 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3639 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.077 The recommended git tool is: git 00:00:00.077 using credential 00000000-0000-0000-0000-000000000002 00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.097 Fetching changes from the remote Git repository 00:00:00.101 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.133 Using shallow fetch with depth 1 00:00:00.133 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.133 > git --version # timeout=10 00:00:00.167 > git --version # 'git version 2.39.2' 00:00:00.167 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.200 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.200 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.503 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.515 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.527 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.527 > git config core.sparsecheckout # timeout=10 00:00:05.537 > git read-tree -mu HEAD # timeout=10 00:00:05.552 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.568 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.568 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.656 [Pipeline] Start of Pipeline 00:00:05.673 [Pipeline] library 00:00:05.674 Loading library shm_lib@master 00:00:05.674 Library shm_lib@master is cached. Copying from home. 00:00:05.687 [Pipeline] node 00:00:05.713 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:05.714 [Pipeline] { 00:00:05.722 [Pipeline] catchError 00:00:05.723 [Pipeline] { 00:00:05.733 [Pipeline] wrap 00:00:05.739 [Pipeline] { 00:00:05.746 [Pipeline] stage 00:00:05.748 [Pipeline] { (Prologue) 00:00:05.764 [Pipeline] echo 00:00:05.765 Node: VM-host-SM9 00:00:05.771 [Pipeline] cleanWs 00:00:05.780 [WS-CLEANUP] Deleting project workspace... 00:00:05.780 [WS-CLEANUP] Deferred wipeout is used... 
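For reference, the jbp checkout above reduces to a shallow single-branch fetch followed by a detached checkout of FETCH_HEAD. A condensed, stand-alone sketch of that sequence (repository URL and options taken from the log; the hashed workspace path and Jenkins plugin bookkeeping are omitted):

    #!/usr/bin/env bash
    # Condensed sketch of the checkout performed above (illustrative, not the git plugin's exact steps).
    set -euo pipefail

    REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git init jbp && cd jbp
    git config remote.origin.url "$REPO"

    # Shallow fetch of master only; --depth=1 keeps just the tip commit.
    git fetch --tags --force --progress --depth=1 -- "$REPO" refs/heads/master

    # Detached checkout of whatever FETCH_HEAD resolved to (b9dd3f7ec12b... in this run).
    git checkout -f "$(git rev-parse 'FETCH_HEAD^{commit}')"
    git log --oneline -n1
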
00:00:05.785 [WS-CLEANUP] done 00:00:05.930 [Pipeline] setCustomBuildProperty 00:00:05.986 [Pipeline] httpRequest 00:00:06.317 [Pipeline] echo 00:00:06.319 Sorcerer 10.211.164.20 is alive 00:00:06.327 [Pipeline] retry 00:00:06.329 [Pipeline] { 00:00:06.343 [Pipeline] httpRequest 00:00:06.347 HttpMethod: GET 00:00:06.348 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.348 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.359 Response Code: HTTP/1.1 200 OK 00:00:06.360 Success: Status code 200 is in the accepted range: 200,404 00:00:06.360 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:16.167 [Pipeline] } 00:00:16.187 [Pipeline] // retry 00:00:16.195 [Pipeline] sh 00:00:16.478 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:16.495 [Pipeline] httpRequest 00:00:16.926 [Pipeline] echo 00:00:16.928 Sorcerer 10.211.164.20 is alive 00:00:16.938 [Pipeline] retry 00:00:16.940 [Pipeline] { 00:00:16.955 [Pipeline] httpRequest 00:00:16.960 HttpMethod: GET 00:00:16.961 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:16.961 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:16.979 Response Code: HTTP/1.1 200 OK 00:00:16.980 Success: Status code 200 is in the accepted range: 200,404 00:00:16.980 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:24.039 [Pipeline] } 00:01:24.052 [Pipeline] // retry 00:01:24.057 [Pipeline] sh 00:01:24.332 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:26.929 [Pipeline] sh 00:01:27.212 + git -C spdk log --oneline -n5 00:01:27.212 c13c99a5e test: Various fixes for Fedora40 00:01:27.212 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:27.212 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:27.212 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:27.212 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:27.232 [Pipeline] withCredentials 00:01:27.242 > git --version # timeout=10 00:01:27.254 > git --version # 'git version 2.39.2' 00:01:27.269 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:27.271 [Pipeline] { 00:01:27.279 [Pipeline] retry 00:01:27.281 [Pipeline] { 00:01:27.297 [Pipeline] sh 00:01:27.577 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:27.848 [Pipeline] } 00:01:27.865 [Pipeline] // retry 00:01:27.870 [Pipeline] } 00:01:27.885 [Pipeline] // withCredentials 00:01:27.895 [Pipeline] httpRequest 00:01:28.255 [Pipeline] echo 00:01:28.257 Sorcerer 10.211.164.20 is alive 00:01:28.265 [Pipeline] retry 00:01:28.267 [Pipeline] { 00:01:28.281 [Pipeline] httpRequest 00:01:28.285 HttpMethod: GET 00:01:28.286 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:28.286 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:28.296 Response Code: HTTP/1.1 200 OK 00:01:28.296 Success: Status code 200 is in the accepted range: 200,404 00:01:28.297 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:35.985 [Pipeline] } 00:01:36.001 
[Pipeline] // retry 00:01:36.008 [Pipeline] sh 00:01:36.289 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:37.679 [Pipeline] sh 00:01:37.961 + git -C dpdk log --oneline -n5 00:01:37.961 caf0f5d395 version: 22.11.4 00:01:37.961 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:37.961 dc9c799c7d vhost: fix missing spinlock unlock 00:01:37.961 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:37.961 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:37.979 [Pipeline] writeFile 00:01:37.994 [Pipeline] sh 00:01:38.277 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:38.290 [Pipeline] sh 00:01:38.575 + cat autorun-spdk.conf 00:01:38.575 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.575 SPDK_TEST_NVMF=1 00:01:38.575 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.575 SPDK_TEST_URING=1 00:01:38.575 SPDK_TEST_USDT=1 00:01:38.575 SPDK_RUN_UBSAN=1 00:01:38.575 NET_TYPE=virt 00:01:38.575 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:38.575 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:38.575 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.583 RUN_NIGHTLY=1 00:01:38.585 [Pipeline] } 00:01:38.598 [Pipeline] // stage 00:01:38.613 [Pipeline] stage 00:01:38.615 [Pipeline] { (Run VM) 00:01:38.628 [Pipeline] sh 00:01:38.913 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:38.913 + echo 'Start stage prepare_nvme.sh' 00:01:38.913 Start stage prepare_nvme.sh 00:01:38.913 + [[ -n 5 ]] 00:01:38.913 + disk_prefix=ex5 00:01:38.913 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:38.913 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:38.913 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:38.913 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.913 ++ SPDK_TEST_NVMF=1 00:01:38.913 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.913 ++ SPDK_TEST_URING=1 00:01:38.913 ++ SPDK_TEST_USDT=1 00:01:38.913 ++ SPDK_RUN_UBSAN=1 00:01:38.913 ++ NET_TYPE=virt 00:01:38.913 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:38.913 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:38.913 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.913 ++ RUN_NIGHTLY=1 00:01:38.913 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:38.913 + nvme_files=() 00:01:38.913 + declare -A nvme_files 00:01:38.913 + backend_dir=/var/lib/libvirt/images/backends 00:01:38.913 + nvme_files['nvme.img']=5G 00:01:38.913 + nvme_files['nvme-cmb.img']=5G 00:01:38.913 + nvme_files['nvme-multi0.img']=4G 00:01:38.913 + nvme_files['nvme-multi1.img']=4G 00:01:38.913 + nvme_files['nvme-multi2.img']=4G 00:01:38.913 + nvme_files['nvme-openstack.img']=8G 00:01:38.913 + nvme_files['nvme-zns.img']=5G 00:01:38.913 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:38.913 + (( SPDK_TEST_FTL == 1 )) 00:01:38.913 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:38.913 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:38.913 + for nvme in "${!nvme_files[@]}" 00:01:38.913 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:38.913 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:38.913 + for nvme in "${!nvme_files[@]}" 00:01:38.913 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:38.913 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:38.913 + for nvme in "${!nvme_files[@]}" 00:01:38.913 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:39.173 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:39.173 + for nvme in "${!nvme_files[@]}" 00:01:39.173 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:39.173 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:39.173 + for nvme in "${!nvme_files[@]}" 00:01:39.173 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:39.173 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:39.173 + for nvme in "${!nvme_files[@]}" 00:01:39.173 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:39.434 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:39.434 + for nvme in "${!nvme_files[@]}" 00:01:39.434 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:39.434 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:39.434 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:39.434 + echo 'End stage prepare_nvme.sh' 00:01:39.434 End stage prepare_nvme.sh 00:01:39.446 [Pipeline] sh 00:01:39.731 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:39.731 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:39.731 00:01:39.731 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:39.731 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:39.731 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:39.731 HELP=0 00:01:39.731 DRY_RUN=0 00:01:39.731 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:39.731 NVME_DISKS_TYPE=nvme,nvme, 00:01:39.731 NVME_AUTO_CREATE=0 00:01:39.731 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:39.731 NVME_CMB=,, 00:01:39.731 NVME_PMR=,, 00:01:39.731 NVME_ZNS=,, 00:01:39.731 NVME_MS=,, 00:01:39.731 NVME_FDP=,, 
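The prepare_nvme.sh stage above drives backing-file creation from an associative array mapping image names to sizes. The "Formatting '…', fmt=raw … preallocation=falloc" lines are qemu-img style output, so a minimal stand-alone equivalent of the loop could look like the sketch below (create_nvme_img.sh's internals are not shown in the log, so the direct qemu-img call is an assumption):

    #!/usr/bin/env bash
    # Sketch: recreate the ex5-* NVMe backing files directly with qemu-img.
    # Assumption: qemu-img is what produces the "fmt=raw ... preallocation=falloc" lines above.
    set -euo pipefail

    backend_dir=/var/lib/libvirt/images/backends
    disk_prefix=ex5

    declare -A nvme_files=(
      [nvme.img]=5G        [nvme-cmb.img]=5G      [nvme-zns.img]=5G
      [nvme-multi0.img]=4G [nvme-multi1.img]=4G   [nvme-multi2.img]=4G
      [nvme-openstack.img]=8G
    )

    sudo mkdir -p "$backend_dir"
    for nvme in "${!nvme_files[@]}"; do
      sudo qemu-img create -f raw -o preallocation=falloc \
        "$backend_dir/${disk_prefix}-${nvme}" "${nvme_files[$nvme]}"
    done
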
00:01:39.731 SPDK_VAGRANT_DISTRO=fedora39 00:01:39.731 SPDK_VAGRANT_VMCPU=10 00:01:39.731 SPDK_VAGRANT_VMRAM=12288 00:01:39.731 SPDK_VAGRANT_PROVIDER=libvirt 00:01:39.731 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:39.731 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:39.731 SPDK_OPENSTACK_NETWORK=0 00:01:39.732 VAGRANT_PACKAGE_BOX=0 00:01:39.732 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:39.732 FORCE_DISTRO=true 00:01:39.732 VAGRANT_BOX_VERSION= 00:01:39.732 EXTRA_VAGRANTFILES= 00:01:39.732 NIC_MODEL=e1000 00:01:39.732 00:01:39.732 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:39.732 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:42.331 Bringing machine 'default' up with 'libvirt' provider... 00:01:42.899 ==> default: Creating image (snapshot of base box volume). 00:01:43.157 ==> default: Creating domain with the following settings... 00:01:43.157 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731867040_a25a143d8f73763cfb72 00:01:43.157 ==> default: -- Domain type: kvm 00:01:43.157 ==> default: -- Cpus: 10 00:01:43.157 ==> default: -- Feature: acpi 00:01:43.157 ==> default: -- Feature: apic 00:01:43.157 ==> default: -- Feature: pae 00:01:43.158 ==> default: -- Memory: 12288M 00:01:43.158 ==> default: -- Memory Backing: hugepages: 00:01:43.158 ==> default: -- Management MAC: 00:01:43.158 ==> default: -- Loader: 00:01:43.158 ==> default: -- Nvram: 00:01:43.158 ==> default: -- Base box: spdk/fedora39 00:01:43.158 ==> default: -- Storage pool: default 00:01:43.158 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731867040_a25a143d8f73763cfb72.img (20G) 00:01:43.158 ==> default: -- Volume Cache: default 00:01:43.158 ==> default: -- Kernel: 00:01:43.158 ==> default: -- Initrd: 00:01:43.158 ==> default: -- Graphics Type: vnc 00:01:43.158 ==> default: -- Graphics Port: -1 00:01:43.158 ==> default: -- Graphics IP: 127.0.0.1 00:01:43.158 ==> default: -- Graphics Password: Not defined 00:01:43.158 ==> default: -- Video Type: cirrus 00:01:43.158 ==> default: -- Video VRAM: 9216 00:01:43.158 ==> default: -- Sound Type: 00:01:43.158 ==> default: -- Keymap: en-us 00:01:43.158 ==> default: -- TPM Path: 00:01:43.158 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:43.158 ==> default: -- Command line args: 00:01:43.158 ==> default: -> value=-device, 00:01:43.158 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:43.158 ==> default: -> value=-drive, 00:01:43.158 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:43.158 ==> default: -> value=-device, 00:01:43.158 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.158 ==> default: -> value=-device, 00:01:43.158 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:43.158 ==> default: -> value=-drive, 00:01:43.158 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:43.158 ==> default: -> value=-device, 00:01:43.158 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.158 ==> default: -> value=-drive, 00:01:43.158 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:43.158 ==> default: -> value=-device, 00:01:43.158 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.158 ==> default: -> value=-drive, 00:01:43.158 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:43.158 ==> default: -> value=-device, 00:01:43.158 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.158 ==> default: Creating shared folders metadata... 00:01:43.158 ==> default: Starting domain. 00:01:44.539 ==> default: Waiting for domain to get an IP address... 00:02:02.629 ==> default: Waiting for SSH to become available... 00:02:03.566 ==> default: Configuring and enabling network interfaces... 00:02:07.758 default: SSH address: 192.168.121.22:22 00:02:07.758 default: SSH username: vagrant 00:02:07.758 default: SSH auth method: private key 00:02:10.293 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:16.883 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:23.450 ==> default: Mounting SSHFS shared folder... 00:02:24.386 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:24.386 ==> default: Checking Mount.. 00:02:25.801 ==> default: Folder Successfully Mounted! 00:02:25.801 ==> default: Running provisioner: file... 00:02:26.737 default: ~/.gitconfig => .gitconfig 00:02:26.995 00:02:26.995 SUCCESS! 00:02:26.995 00:02:26.995 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:26.995 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:26.995 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:26.995 00:02:27.003 [Pipeline] } 00:02:27.019 [Pipeline] // stage 00:02:27.028 [Pipeline] dir 00:02:27.029 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:27.031 [Pipeline] { 00:02:27.044 [Pipeline] catchError 00:02:27.045 [Pipeline] { 00:02:27.057 [Pipeline] sh 00:02:27.335 + vagrant ssh-config --host vagrant 00:02:27.335 + sed -ne /^Host/,$p 00:02:27.335 + tee ssh_conf 00:02:31.519 Host vagrant 00:02:31.519 HostName 192.168.121.22 00:02:31.519 User vagrant 00:02:31.519 Port 22 00:02:31.519 UserKnownHostsFile /dev/null 00:02:31.519 StrictHostKeyChecking no 00:02:31.519 PasswordAuthentication no 00:02:31.519 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:31.519 IdentitiesOnly yes 00:02:31.519 LogLevel FATAL 00:02:31.519 ForwardAgent yes 00:02:31.519 ForwardX11 yes 00:02:31.519 00:02:31.530 [Pipeline] withEnv 00:02:31.531 [Pipeline] { 00:02:31.544 [Pipeline] sh 00:02:31.821 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:31.821 source /etc/os-release 00:02:31.821 [[ -e /image.version ]] && img=$(< /image.version) 00:02:31.821 # Minimal, systemd-like check. 
00:02:31.821 if [[ -e /.dockerenv ]]; then 00:02:31.821 # Clear garbage from the node's name: 00:02:31.821 # agt-er_autotest_547-896 -> autotest_547-896 00:02:31.821 # $HOSTNAME is the actual container id 00:02:31.821 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:31.821 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:31.821 # We can assume this is a mount from a host where container is running, 00:02:31.821 # so fetch its hostname to easily identify the target swarm worker. 00:02:31.821 container="$(< /etc/hostname) ($agent)" 00:02:31.821 else 00:02:31.821 # Fallback 00:02:31.821 container=$agent 00:02:31.821 fi 00:02:31.821 fi 00:02:31.821 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:31.821 00:02:32.091 [Pipeline] } 00:02:32.109 [Pipeline] // withEnv 00:02:32.116 [Pipeline] setCustomBuildProperty 00:02:32.130 [Pipeline] stage 00:02:32.132 [Pipeline] { (Tests) 00:02:32.150 [Pipeline] sh 00:02:32.432 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:32.704 [Pipeline] sh 00:02:32.983 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:33.257 [Pipeline] timeout 00:02:33.258 Timeout set to expire in 1 hr 0 min 00:02:33.259 [Pipeline] { 00:02:33.275 [Pipeline] sh 00:02:33.555 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:34.122 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:34.134 [Pipeline] sh 00:02:34.414 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:34.428 [Pipeline] sh 00:02:34.707 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:35.056 [Pipeline] sh 00:02:35.335 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:35.593 ++ readlink -f spdk_repo 00:02:35.593 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:35.593 + [[ -n /home/vagrant/spdk_repo ]] 00:02:35.593 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:35.593 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:35.593 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:35.593 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:35.593 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:35.593 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:35.593 + cd /home/vagrant/spdk_repo 00:02:35.593 + source /etc/os-release 00:02:35.593 ++ NAME='Fedora Linux' 00:02:35.593 ++ VERSION='39 (Cloud Edition)' 00:02:35.593 ++ ID=fedora 00:02:35.593 ++ VERSION_ID=39 00:02:35.593 ++ VERSION_CODENAME= 00:02:35.593 ++ PLATFORM_ID=platform:f39 00:02:35.593 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:35.593 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:35.593 ++ LOGO=fedora-logo-icon 00:02:35.593 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:35.593 ++ HOME_URL=https://fedoraproject.org/ 00:02:35.594 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:35.594 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:35.594 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:35.594 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:35.594 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:35.594 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:35.594 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:35.594 ++ SUPPORT_END=2024-11-12 00:02:35.594 ++ VARIANT='Cloud Edition' 00:02:35.594 ++ VARIANT_ID=cloud 00:02:35.594 + uname -a 00:02:35.594 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:35.594 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:35.594 Hugepages 00:02:35.594 node hugesize free / total 00:02:35.594 node0 1048576kB 0 / 0 00:02:35.594 node0 2048kB 0 / 0 00:02:35.594 00:02:35.594 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:35.594 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:35.594 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:35.594 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:35.594 + rm -f /tmp/spdk-ld-path 00:02:35.594 + source autorun-spdk.conf 00:02:35.594 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.594 ++ SPDK_TEST_NVMF=1 00:02:35.594 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:35.594 ++ SPDK_TEST_URING=1 00:02:35.594 ++ SPDK_TEST_USDT=1 00:02:35.594 ++ SPDK_RUN_UBSAN=1 00:02:35.594 ++ NET_TYPE=virt 00:02:35.594 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:35.594 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:35.594 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.594 ++ RUN_NIGHTLY=1 00:02:35.594 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:35.594 + [[ -n '' ]] 00:02:35.594 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:35.853 + for M in /var/spdk/build-*-manifest.txt 00:02:35.853 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:35.853 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.853 + for M in /var/spdk/build-*-manifest.txt 00:02:35.853 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:35.853 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.853 + for M in /var/spdk/build-*-manifest.txt 00:02:35.853 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:35.853 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.853 ++ uname 00:02:35.853 + [[ Linux == \L\i\n\u\x ]] 00:02:35.853 + sudo dmesg -T 00:02:35.853 + sudo dmesg --clear 00:02:35.853 + dmesg_pid=5967 00:02:35.853 + [[ Fedora Linux == FreeBSD ]] 00:02:35.853 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.853 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.853 + sudo dmesg 
-Tw 00:02:35.853 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:35.853 + [[ -x /usr/src/fio-static/fio ]] 00:02:35.853 + export FIO_BIN=/usr/src/fio-static/fio 00:02:35.853 + FIO_BIN=/usr/src/fio-static/fio 00:02:35.853 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:35.853 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:35.853 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:35.853 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.853 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.853 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:35.853 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.853 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.853 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.853 Test configuration: 00:02:35.853 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.853 SPDK_TEST_NVMF=1 00:02:35.853 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:35.853 SPDK_TEST_URING=1 00:02:35.853 SPDK_TEST_USDT=1 00:02:35.853 SPDK_RUN_UBSAN=1 00:02:35.853 NET_TYPE=virt 00:02:35.853 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:35.853 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:35.853 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.853 RUN_NIGHTLY=1 18:11:34 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:35.853 18:11:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:35.853 18:11:34 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:35.853 18:11:34 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.853 18:11:34 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.853 18:11:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.853 18:11:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.853 18:11:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.853 18:11:34 -- paths/export.sh@5 -- $ export PATH 00:02:35.853 18:11:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.853 18:11:34 -- common/autobuild_common.sh@439 
-- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:35.853 18:11:34 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:35.853 18:11:34 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731867094.XXXXXX 00:02:35.853 18:11:34 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731867094.RJ6edq 00:02:35.853 18:11:34 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:35.853 18:11:34 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:35.853 18:11:34 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:35.853 18:11:34 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:35.853 18:11:34 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:35.853 18:11:34 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:35.854 18:11:34 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:35.854 18:11:34 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:35.854 18:11:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.854 18:11:34 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:35.854 18:11:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:35.854 18:11:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:35.854 18:11:34 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:35.854 18:11:34 -- spdk/autobuild.sh@16 -- $ date -u 00:02:35.854 Sun Nov 17 06:11:34 PM UTC 2024 00:02:35.854 18:11:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:35.854 LTS-67-gc13c99a5e 00:02:35.854 18:11:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:35.854 18:11:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:35.854 18:11:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:35.854 18:11:34 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:35.854 18:11:34 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:35.854 18:11:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.854 ************************************ 00:02:35.854 START TEST ubsan 00:02:35.854 ************************************ 00:02:35.854 using ubsan 00:02:35.854 18:11:34 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:35.854 00:02:35.854 real 0m0.000s 00:02:35.854 user 0m0.000s 00:02:35.854 sys 0m0.000s 00:02:35.854 18:11:34 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:35.854 ************************************ 00:02:35.854 END TEST ubsan 00:02:35.854 ************************************ 00:02:35.854 18:11:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.113 18:11:34 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:36.113 18:11:34 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:36.113 18:11:34 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:36.113 18:11:34 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:36.113 18:11:34 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:36.113 18:11:34 -- common/autotest_common.sh@10 -- $ set +x 
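The config_params string assembled above by get_config_params is what will eventually be handed to SPDK's ./configure, with --with-dpdk pointing at the externally built DPDK tree from SPDK_RUN_EXTERNAL_DPDK. A condensed sketch of that configure step, using the exact flags from the log (the surrounding autobuild plumbing is omitted):

    #!/usr/bin/env bash
    # Sketch: configure SPDK against the pre-built external DPDK, using the
    # parameter string assembled by get_config_params above.
    set -euo pipefail
    cd /home/vagrant/spdk_repo/spdk

    ./configure \
      --enable-debug --enable-werror \
      --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator \
      --disable-unit-tests --enable-ubsan --enable-coverage \
      --with-ublk --with-uring \
      --with-dpdk=/home/vagrant/spdk_repo/dpdk/build   # SPDK_RUN_EXTERNAL_DPDK

    make -j"$(nproc)"
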
00:02:36.113 ************************************ 00:02:36.113 START TEST build_native_dpdk 00:02:36.113 ************************************ 00:02:36.113 18:11:34 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:36.113 18:11:34 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:36.113 18:11:34 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:36.113 18:11:34 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:36.113 18:11:34 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:36.113 18:11:34 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:36.113 18:11:34 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:36.113 18:11:34 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:36.113 18:11:34 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:36.113 18:11:34 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:36.113 18:11:34 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:36.113 18:11:34 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:36.113 18:11:34 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:36.113 18:11:34 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:36.113 18:11:34 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:36.113 18:11:34 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:36.113 18:11:34 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:36.113 18:11:34 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:36.113 18:11:34 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:36.113 18:11:34 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:36.113 18:11:34 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:36.113 caf0f5d395 version: 22.11.4 00:02:36.113 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:36.113 dc9c799c7d vhost: fix missing spinlock unlock 00:02:36.113 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:36.113 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:36.113 18:11:34 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:36.113 18:11:34 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:36.113 18:11:34 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:36.113 18:11:34 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:36.113 18:11:34 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:36.113 18:11:34 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:36.113 18:11:34 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:36.113 18:11:34 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:36.113 18:11:34 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:36.113 18:11:34 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:36.113 18:11:34 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:36.113 18:11:34 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:36.113 18:11:34 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:36.113 18:11:34 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:36.113 18:11:34 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 
00:02:36.113 18:11:34 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:36.113 18:11:34 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:36.113 18:11:34 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:36.113 18:11:34 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:36.113 18:11:34 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:36.113 18:11:34 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:36.113 18:11:34 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:36.113 18:11:34 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:36.113 18:11:34 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.113 18:11:34 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:36.113 18:11:34 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:36.113 18:11:34 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:36.113 18:11:34 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:36.113 18:11:34 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:36.113 18:11:34 -- scripts/common.sh@343 -- $ case "$op" in 00:02:36.113 18:11:34 -- scripts/common.sh@344 -- $ : 1 00:02:36.113 18:11:34 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:36.113 18:11:34 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:36.113 18:11:34 -- scripts/common.sh@364 -- $ decimal 22 00:02:36.113 18:11:34 -- scripts/common.sh@352 -- $ local d=22 00:02:36.113 18:11:34 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.113 18:11:34 -- scripts/common.sh@354 -- $ echo 22 00:02:36.113 18:11:34 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:36.113 18:11:34 -- scripts/common.sh@365 -- $ decimal 21 00:02:36.113 18:11:34 -- scripts/common.sh@352 -- $ local d=21 00:02:36.113 18:11:34 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:36.113 18:11:34 -- scripts/common.sh@354 -- $ echo 21 00:02:36.113 18:11:34 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:36.113 18:11:34 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:36.113 18:11:34 -- scripts/common.sh@366 -- $ return 1 00:02:36.113 18:11:34 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:36.113 patching file config/rte_config.h 00:02:36.113 Hunk #1 succeeded at 60 (offset 1 line). 00:02:36.113 18:11:34 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:36.113 18:11:34 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:36.113 18:11:34 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:36.113 18:11:34 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:36.113 18:11:34 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:36.113 18:11:34 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:36.113 18:11:34 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.113 18:11:34 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:36.113 18:11:34 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:36.113 18:11:34 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:36.113 18:11:34 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:36.113 18:11:34 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:36.113 18:11:34 -- scripts/common.sh@343 -- $ case "$op" in 00:02:36.113 18:11:34 -- scripts/common.sh@344 -- $ : 1 00:02:36.113 18:11:34 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:36.113 18:11:34 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:36.113 18:11:34 -- scripts/common.sh@364 -- $ decimal 22 00:02:36.113 18:11:34 -- scripts/common.sh@352 -- $ local d=22 00:02:36.113 18:11:34 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.113 18:11:34 -- scripts/common.sh@354 -- $ echo 22 00:02:36.113 18:11:34 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:36.113 18:11:34 -- scripts/common.sh@365 -- $ decimal 24 00:02:36.113 18:11:34 -- scripts/common.sh@352 -- $ local d=24 00:02:36.113 18:11:34 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:36.114 18:11:34 -- scripts/common.sh@354 -- $ echo 24 00:02:36.114 18:11:34 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:36.114 18:11:34 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:36.114 18:11:34 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:36.114 18:11:34 -- scripts/common.sh@367 -- $ return 0 00:02:36.114 18:11:34 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:36.114 patching file lib/pcapng/rte_pcapng.c 00:02:36.114 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:36.114 18:11:34 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:36.114 18:11:34 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:36.114 18:11:34 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:36.114 18:11:34 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:36.114 18:11:34 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:41.385 The Meson build system 00:02:41.385 Version: 1.5.0 00:02:41.385 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:41.385 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:41.385 Build type: native build 00:02:41.385 Program cat found: YES (/usr/bin/cat) 00:02:41.385 Project name: DPDK 00:02:41.385 Project version: 22.11.4 00:02:41.385 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:41.385 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:41.385 Host machine cpu family: x86_64 00:02:41.385 Host machine cpu: x86_64 00:02:41.385 Message: ## Building in Developer Mode ## 00:02:41.385 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:41.385 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:41.385 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:41.385 Program objdump found: YES (/usr/bin/objdump) 00:02:41.385 Program python3 found: YES (/usr/bin/python3) 00:02:41.385 Program cat found: YES (/usr/bin/cat) 00:02:41.385 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:41.385 Checking for size of "void *" : 8 00:02:41.385 Checking for size of "void *" : 8 (cached) 00:02:41.385 Library m found: YES 00:02:41.385 Library numa found: YES 00:02:41.385 Has header "numaif.h" : YES 00:02:41.385 Library fdt found: NO 00:02:41.385 Library execinfo found: NO 00:02:41.385 Has header "execinfo.h" : YES 00:02:41.385 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:41.385 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:41.385 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:41.385 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:41.385 Run-time dependency openssl found: YES 3.1.1 00:02:41.385 Run-time dependency libpcap found: YES 1.10.4 00:02:41.385 Has header "pcap.h" with dependency libpcap: YES 00:02:41.385 Compiler for C supports arguments -Wcast-qual: YES 00:02:41.385 Compiler for C supports arguments -Wdeprecated: YES 00:02:41.385 Compiler for C supports arguments -Wformat: YES 00:02:41.385 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:41.385 Compiler for C supports arguments -Wformat-security: NO 00:02:41.385 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:41.385 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:41.385 Compiler for C supports arguments -Wnested-externs: YES 00:02:41.385 Compiler for C supports arguments -Wold-style-definition: YES 00:02:41.385 Compiler for C supports arguments -Wpointer-arith: YES 00:02:41.385 Compiler for C supports arguments -Wsign-compare: YES 00:02:41.385 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:41.385 Compiler for C supports arguments -Wundef: YES 00:02:41.385 Compiler for C supports arguments -Wwrite-strings: YES 00:02:41.385 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:41.385 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:41.385 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:41.385 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:41.385 Compiler for C supports arguments -mavx512f: YES 00:02:41.385 Checking if "AVX512 checking" compiles: YES 00:02:41.385 Fetching value of define "__SSE4_2__" : 1 00:02:41.385 Fetching value of define "__AES__" : 1 00:02:41.385 Fetching value of define "__AVX__" : 1 00:02:41.385 Fetching value of define "__AVX2__" : 1 00:02:41.385 Fetching value of define "__AVX512BW__" : (undefined) 00:02:41.385 Fetching value of define "__AVX512CD__" : (undefined) 00:02:41.385 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:41.385 Fetching value of define "__AVX512F__" : (undefined) 00:02:41.385 Fetching value of define "__AVX512VL__" : (undefined) 00:02:41.385 Fetching value of define "__PCLMUL__" : 1 00:02:41.385 Fetching value of define "__RDRND__" : 1 00:02:41.385 Fetching value of define "__RDSEED__" : 1 00:02:41.385 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:41.385 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:41.385 Message: lib/kvargs: Defining dependency "kvargs" 00:02:41.385 Message: lib/telemetry: Defining dependency "telemetry" 00:02:41.385 Checking for function "getentropy" : YES 00:02:41.385 Message: lib/eal: Defining dependency "eal" 00:02:41.385 Message: lib/ring: Defining dependency "ring" 00:02:41.385 Message: lib/rcu: Defining dependency "rcu" 00:02:41.385 Message: lib/mempool: Defining dependency "mempool" 00:02:41.385 Message: lib/mbuf: Defining dependency "mbuf" 00:02:41.385 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:41.385 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:41.385 Compiler for C supports arguments -mpclmul: YES 00:02:41.385 Compiler for C supports arguments -maes: YES 00:02:41.385 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:41.385 Compiler for C supports arguments -mavx512bw: YES 00:02:41.385 Compiler for C supports arguments -mavx512dq: YES 00:02:41.385 Compiler for C supports arguments -mavx512vl: YES 00:02:41.385 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:41.385 Compiler for C supports arguments -mavx2: YES 00:02:41.385 Compiler for C supports arguments -mavx: YES 00:02:41.385 Message: lib/net: Defining dependency "net" 00:02:41.385 Message: lib/meter: Defining dependency "meter" 00:02:41.385 Message: lib/ethdev: Defining dependency "ethdev" 00:02:41.385 Message: lib/pci: Defining dependency "pci" 00:02:41.385 Message: lib/cmdline: Defining dependency "cmdline" 00:02:41.385 Message: lib/metrics: Defining dependency "metrics" 00:02:41.385 Message: lib/hash: Defining dependency "hash" 00:02:41.385 Message: lib/timer: Defining dependency "timer" 00:02:41.385 Fetching value of define "__AVX2__" : 1 (cached) 00:02:41.385 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:41.385 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:41.385 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:41.386 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:41.386 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:41.386 Message: lib/acl: Defining dependency "acl" 00:02:41.386 Message: lib/bbdev: Defining dependency "bbdev" 00:02:41.386 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:41.386 Run-time dependency libelf found: YES 0.191 00:02:41.386 Message: lib/bpf: Defining dependency "bpf" 00:02:41.386 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:41.386 Message: lib/compressdev: Defining dependency "compressdev" 00:02:41.386 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:41.386 Message: lib/distributor: Defining dependency "distributor" 00:02:41.386 Message: lib/efd: Defining dependency "efd" 00:02:41.386 Message: lib/eventdev: Defining dependency "eventdev" 00:02:41.386 Message: lib/gpudev: Defining dependency "gpudev" 00:02:41.386 Message: lib/gro: Defining dependency "gro" 00:02:41.386 Message: lib/gso: Defining dependency "gso" 00:02:41.386 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:41.386 Message: lib/jobstats: Defining dependency "jobstats" 00:02:41.386 Message: lib/latencystats: Defining dependency "latencystats" 00:02:41.386 Message: lib/lpm: Defining dependency "lpm" 00:02:41.386 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:41.386 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:41.386 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:41.386 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:41.386 Message: lib/member: Defining dependency "member" 00:02:41.386 Message: lib/pcapng: Defining dependency "pcapng" 00:02:41.386 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:41.386 Message: lib/power: Defining dependency "power" 00:02:41.386 Message: lib/rawdev: Defining dependency "rawdev" 00:02:41.386 Message: lib/regexdev: Defining dependency "regexdev" 00:02:41.386 Message: lib/dmadev: Defining dependency "dmadev" 00:02:41.386 Message: lib/rib: Defining 
dependency "rib" 00:02:41.386 Message: lib/reorder: Defining dependency "reorder" 00:02:41.386 Message: lib/sched: Defining dependency "sched" 00:02:41.386 Message: lib/security: Defining dependency "security" 00:02:41.386 Message: lib/stack: Defining dependency "stack" 00:02:41.386 Has header "linux/userfaultfd.h" : YES 00:02:41.386 Message: lib/vhost: Defining dependency "vhost" 00:02:41.386 Message: lib/ipsec: Defining dependency "ipsec" 00:02:41.386 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:41.386 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:41.386 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:41.386 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:41.386 Message: lib/fib: Defining dependency "fib" 00:02:41.386 Message: lib/port: Defining dependency "port" 00:02:41.386 Message: lib/pdump: Defining dependency "pdump" 00:02:41.386 Message: lib/table: Defining dependency "table" 00:02:41.386 Message: lib/pipeline: Defining dependency "pipeline" 00:02:41.386 Message: lib/graph: Defining dependency "graph" 00:02:41.386 Message: lib/node: Defining dependency "node" 00:02:41.386 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:41.386 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:41.386 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:41.386 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:41.386 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:41.386 Compiler for C supports arguments -Wno-unused-value: YES 00:02:41.386 Compiler for C supports arguments -Wno-format: YES 00:02:41.386 Compiler for C supports arguments -Wno-format-security: YES 00:02:41.386 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:43.290 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:43.290 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:43.290 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:43.290 Fetching value of define "__AVX2__" : 1 (cached) 00:02:43.290 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:43.290 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:43.290 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:43.290 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:43.290 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:43.290 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:43.290 Configuring doxy-api.conf using configuration 00:02:43.290 Program sphinx-build found: NO 00:02:43.290 Configuring rte_build_config.h using configuration 00:02:43.290 Message: 00:02:43.290 ================= 00:02:43.290 Applications Enabled 00:02:43.290 ================= 00:02:43.290 00:02:43.290 apps: 00:02:43.290 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:43.290 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:43.290 test-security-perf, 00:02:43.290 00:02:43.290 Message: 00:02:43.290 ================= 00:02:43.290 Libraries Enabled 00:02:43.290 ================= 00:02:43.290 00:02:43.290 libs: 00:02:43.290 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:43.290 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:43.290 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:43.290 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:43.290 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:43.290 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:43.290 table, pipeline, graph, node, 00:02:43.290 00:02:43.290 Message: 00:02:43.290 =============== 00:02:43.290 Drivers Enabled 00:02:43.290 =============== 00:02:43.290 00:02:43.290 common: 00:02:43.290 00:02:43.290 bus: 00:02:43.290 pci, vdev, 00:02:43.290 mempool: 00:02:43.290 ring, 00:02:43.290 dma: 00:02:43.290 00:02:43.290 net: 00:02:43.290 i40e, 00:02:43.290 raw: 00:02:43.290 00:02:43.290 crypto: 00:02:43.290 00:02:43.290 compress: 00:02:43.290 00:02:43.290 regex: 00:02:43.290 00:02:43.290 vdpa: 00:02:43.290 00:02:43.290 event: 00:02:43.290 00:02:43.290 baseband: 00:02:43.290 00:02:43.290 gpu: 00:02:43.290 00:02:43.290 00:02:43.290 Message: 00:02:43.290 ================= 00:02:43.290 Content Skipped 00:02:43.290 ================= 00:02:43.290 00:02:43.290 apps: 00:02:43.290 00:02:43.290 libs: 00:02:43.290 kni: explicitly disabled via build config (deprecated lib) 00:02:43.290 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:43.290 00:02:43.290 drivers: 00:02:43.290 common/cpt: not in enabled drivers build config 00:02:43.290 common/dpaax: not in enabled drivers build config 00:02:43.290 common/iavf: not in enabled drivers build config 00:02:43.290 common/idpf: not in enabled drivers build config 00:02:43.290 common/mvep: not in enabled drivers build config 00:02:43.290 common/octeontx: not in enabled drivers build config 00:02:43.290 bus/auxiliary: not in enabled drivers build config 00:02:43.290 bus/dpaa: not in enabled drivers build config 00:02:43.290 bus/fslmc: not in enabled drivers build config 00:02:43.290 bus/ifpga: not in enabled drivers build config 00:02:43.290 bus/vmbus: not in enabled drivers build config 00:02:43.290 common/cnxk: not in enabled drivers build config 00:02:43.290 common/mlx5: not in enabled drivers build config 00:02:43.290 common/qat: not in enabled drivers build config 00:02:43.290 common/sfc_efx: not in enabled drivers build config 00:02:43.290 mempool/bucket: not in enabled drivers build config 00:02:43.290 mempool/cnxk: not in enabled drivers build config 00:02:43.290 mempool/dpaa: not in enabled drivers build config 00:02:43.290 mempool/dpaa2: not in enabled drivers build config 00:02:43.290 mempool/octeontx: not in enabled drivers build config 00:02:43.290 mempool/stack: not in enabled drivers build config 00:02:43.290 dma/cnxk: not in enabled drivers build config 00:02:43.290 dma/dpaa: not in enabled drivers build config 00:02:43.290 dma/dpaa2: not in enabled drivers build config 00:02:43.290 dma/hisilicon: not in enabled drivers build config 00:02:43.290 dma/idxd: not in enabled drivers build config 00:02:43.290 dma/ioat: not in enabled drivers build config 00:02:43.290 dma/skeleton: not in enabled drivers build config 00:02:43.290 net/af_packet: not in enabled drivers build config 00:02:43.290 net/af_xdp: not in enabled drivers build config 00:02:43.290 net/ark: not in enabled drivers build config 00:02:43.290 net/atlantic: not in enabled drivers build config 00:02:43.290 net/avp: not in enabled drivers build config 00:02:43.290 net/axgbe: not in enabled drivers build config 00:02:43.290 net/bnx2x: not in enabled drivers build config 00:02:43.290 net/bnxt: not in enabled drivers build config 00:02:43.290 net/bonding: not in enabled drivers build config 00:02:43.290 net/cnxk: not in enabled drivers build config 00:02:43.290 net/cxgbe: not in 
enabled drivers build config 00:02:43.290 net/dpaa: not in enabled drivers build config 00:02:43.290 net/dpaa2: not in enabled drivers build config 00:02:43.290 net/e1000: not in enabled drivers build config 00:02:43.290 net/ena: not in enabled drivers build config 00:02:43.290 net/enetc: not in enabled drivers build config 00:02:43.290 net/enetfec: not in enabled drivers build config 00:02:43.290 net/enic: not in enabled drivers build config 00:02:43.290 net/failsafe: not in enabled drivers build config 00:02:43.290 net/fm10k: not in enabled drivers build config 00:02:43.290 net/gve: not in enabled drivers build config 00:02:43.290 net/hinic: not in enabled drivers build config 00:02:43.290 net/hns3: not in enabled drivers build config 00:02:43.290 net/iavf: not in enabled drivers build config 00:02:43.290 net/ice: not in enabled drivers build config 00:02:43.290 net/idpf: not in enabled drivers build config 00:02:43.290 net/igc: not in enabled drivers build config 00:02:43.290 net/ionic: not in enabled drivers build config 00:02:43.290 net/ipn3ke: not in enabled drivers build config 00:02:43.290 net/ixgbe: not in enabled drivers build config 00:02:43.290 net/kni: not in enabled drivers build config 00:02:43.290 net/liquidio: not in enabled drivers build config 00:02:43.290 net/mana: not in enabled drivers build config 00:02:43.290 net/memif: not in enabled drivers build config 00:02:43.290 net/mlx4: not in enabled drivers build config 00:02:43.290 net/mlx5: not in enabled drivers build config 00:02:43.290 net/mvneta: not in enabled drivers build config 00:02:43.290 net/mvpp2: not in enabled drivers build config 00:02:43.290 net/netvsc: not in enabled drivers build config 00:02:43.290 net/nfb: not in enabled drivers build config 00:02:43.290 net/nfp: not in enabled drivers build config 00:02:43.290 net/ngbe: not in enabled drivers build config 00:02:43.290 net/null: not in enabled drivers build config 00:02:43.290 net/octeontx: not in enabled drivers build config 00:02:43.290 net/octeon_ep: not in enabled drivers build config 00:02:43.291 net/pcap: not in enabled drivers build config 00:02:43.291 net/pfe: not in enabled drivers build config 00:02:43.291 net/qede: not in enabled drivers build config 00:02:43.291 net/ring: not in enabled drivers build config 00:02:43.291 net/sfc: not in enabled drivers build config 00:02:43.291 net/softnic: not in enabled drivers build config 00:02:43.291 net/tap: not in enabled drivers build config 00:02:43.291 net/thunderx: not in enabled drivers build config 00:02:43.291 net/txgbe: not in enabled drivers build config 00:02:43.291 net/vdev_netvsc: not in enabled drivers build config 00:02:43.291 net/vhost: not in enabled drivers build config 00:02:43.291 net/virtio: not in enabled drivers build config 00:02:43.291 net/vmxnet3: not in enabled drivers build config 00:02:43.291 raw/cnxk_bphy: not in enabled drivers build config 00:02:43.291 raw/cnxk_gpio: not in enabled drivers build config 00:02:43.291 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:43.291 raw/ifpga: not in enabled drivers build config 00:02:43.291 raw/ntb: not in enabled drivers build config 00:02:43.291 raw/skeleton: not in enabled drivers build config 00:02:43.291 crypto/armv8: not in enabled drivers build config 00:02:43.291 crypto/bcmfs: not in enabled drivers build config 00:02:43.291 crypto/caam_jr: not in enabled drivers build config 00:02:43.291 crypto/ccp: not in enabled drivers build config 00:02:43.291 crypto/cnxk: not in enabled drivers build config 00:02:43.291 
crypto/dpaa_sec: not in enabled drivers build config 00:02:43.291 crypto/dpaa2_sec: not in enabled drivers build config 00:02:43.291 crypto/ipsec_mb: not in enabled drivers build config 00:02:43.291 crypto/mlx5: not in enabled drivers build config 00:02:43.291 crypto/mvsam: not in enabled drivers build config 00:02:43.291 crypto/nitrox: not in enabled drivers build config 00:02:43.291 crypto/null: not in enabled drivers build config 00:02:43.291 crypto/octeontx: not in enabled drivers build config 00:02:43.291 crypto/openssl: not in enabled drivers build config 00:02:43.291 crypto/scheduler: not in enabled drivers build config 00:02:43.291 crypto/uadk: not in enabled drivers build config 00:02:43.291 crypto/virtio: not in enabled drivers build config 00:02:43.291 compress/isal: not in enabled drivers build config 00:02:43.291 compress/mlx5: not in enabled drivers build config 00:02:43.291 compress/octeontx: not in enabled drivers build config 00:02:43.291 compress/zlib: not in enabled drivers build config 00:02:43.291 regex/mlx5: not in enabled drivers build config 00:02:43.291 regex/cn9k: not in enabled drivers build config 00:02:43.291 vdpa/ifc: not in enabled drivers build config 00:02:43.291 vdpa/mlx5: not in enabled drivers build config 00:02:43.291 vdpa/sfc: not in enabled drivers build config 00:02:43.291 event/cnxk: not in enabled drivers build config 00:02:43.291 event/dlb2: not in enabled drivers build config 00:02:43.291 event/dpaa: not in enabled drivers build config 00:02:43.291 event/dpaa2: not in enabled drivers build config 00:02:43.291 event/dsw: not in enabled drivers build config 00:02:43.291 event/opdl: not in enabled drivers build config 00:02:43.291 event/skeleton: not in enabled drivers build config 00:02:43.291 event/sw: not in enabled drivers build config 00:02:43.291 event/octeontx: not in enabled drivers build config 00:02:43.291 baseband/acc: not in enabled drivers build config 00:02:43.291 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:43.291 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:43.291 baseband/la12xx: not in enabled drivers build config 00:02:43.291 baseband/null: not in enabled drivers build config 00:02:43.291 baseband/turbo_sw: not in enabled drivers build config 00:02:43.291 gpu/cuda: not in enabled drivers build config 00:02:43.291 00:02:43.291 00:02:43.291 Build targets in project: 314 00:02:43.291 00:02:43.291 DPDK 22.11.4 00:02:43.291 00:02:43.291 User defined options 00:02:43.291 libdir : lib 00:02:43.291 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:43.291 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:43.291 c_link_args : 00:02:43.291 enable_docs : false 00:02:43.291 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:43.291 enable_kmods : false 00:02:43.291 machine : native 00:02:43.291 tests : false 00:02:43.291 00:02:43.291 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.291 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
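Note: the configure step summarized above is produced by the SPDK autobuild wrapper, and the exact command it runs is not shown in this log. As a rough, assumed reconstruction from the "User defined options" printed above (build directory taken from the ninja invocation that follows, source tree inferred from the prefix path), an explicit, non-deprecated invocation would look roughly like:

    # Assumed sketch only - reconstructed from the summary above, not the literal command that was run.
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp /home/vagrant/spdk_repo/dpdk \
        --libdir lib \
        --prefix /home/vagrant/spdk_repo/dpdk/build \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10

Using the `meson setup` form (rather than bare `meson [options]`) avoids the "ambiguous and deprecated" warning emitted above.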
00:02:43.291 18:11:41 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:43.291 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:43.291 [1/743] Generating lib/rte_kvargs_def with a custom command 00:02:43.291 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:43.291 [3/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:43.291 [4/743] Generating lib/rte_telemetry_def with a custom command 00:02:43.291 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:43.291 [6/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:43.291 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:43.291 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:43.291 [9/743] Linking static target lib/librte_kvargs.a 00:02:43.291 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:43.291 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:43.291 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:43.291 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:43.291 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:43.291 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.549 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.549 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:43.549 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:43.549 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:43.549 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.549 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:43.549 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:43.549 [23/743] Linking target lib/librte_kvargs.so.23.0 00:02:43.549 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.549 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:43.549 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:43.549 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:43.808 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:43.808 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:43.808 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:43.808 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.808 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:43.808 [33/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:43.808 [34/743] Linking static target lib/librte_telemetry.a 00:02:43.808 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:43.808 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:43.808 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.066 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.066 [39/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:44.066 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.066 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.066 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.066 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.066 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:44.324 [45/743] Linking target lib/librte_telemetry.so.23.0 00:02:44.324 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.324 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:44.324 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.324 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:44.324 [50/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:44.324 [51/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.324 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:44.324 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:44.324 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.324 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:44.324 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:44.324 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.324 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:44.324 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:44.582 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:44.582 [61/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:44.582 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:44.582 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:44.582 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:44.582 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:44.582 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:44.582 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:44.582 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:44.582 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:44.582 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:44.841 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:44.841 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:44.841 [73/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:44.841 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:44.841 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:44.841 [76/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:44.841 [77/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:44.841 [78/743] Generating lib/rte_eal_def with a 
custom command 00:02:44.841 [79/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.841 [80/743] Generating lib/rte_eal_mingw with a custom command 00:02:44.841 [81/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.841 [82/743] Generating lib/rte_ring_def with a custom command 00:02:44.841 [83/743] Generating lib/rte_ring_mingw with a custom command 00:02:44.841 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:44.841 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:44.841 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:45.099 [87/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:45.099 [88/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:45.099 [89/743] Linking static target lib/librte_ring.a 00:02:45.099 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:45.099 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:45.099 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.099 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:45.357 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.357 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:45.357 [96/743] Linking static target lib/librte_eal.a 00:02:45.616 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:45.616 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:45.616 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:45.616 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:45.616 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:45.616 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:45.616 [103/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:45.616 [104/743] Linking static target lib/librte_rcu.a 00:02:45.616 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:45.874 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:45.874 [107/743] Linking static target lib/librte_mempool.a 00:02:46.132 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.132 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:46.132 [110/743] Generating lib/rte_net_def with a custom command 00:02:46.132 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:46.132 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:46.132 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:46.132 [114/743] Generating lib/rte_meter_def with a custom command 00:02:46.132 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:46.390 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:46.390 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:46.390 [118/743] Linking static target lib/librte_meter.a 00:02:46.390 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:46.390 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:46.390 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:46.390 [122/743] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:46.648 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:46.648 [124/743] Linking static target lib/librte_mbuf.a 00:02:46.648 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:46.648 [126/743] Linking static target lib/librte_net.a 00:02:46.648 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.906 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.906 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:47.164 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:47.164 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:47.164 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:47.164 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:47.164 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.422 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:47.680 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:47.680 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:47.680 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:47.680 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:47.938 [140/743] Generating lib/rte_pci_def with a custom command 00:02:47.938 [141/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:47.938 [142/743] Generating lib/rte_pci_mingw with a custom command 00:02:47.938 [143/743] Linking static target lib/librte_pci.a 00:02:47.938 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:47.938 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:47.938 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:47.938 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:47.938 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:47.938 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:47.938 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.214 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:48.214 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:48.214 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:48.214 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:48.214 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:48.214 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:48.214 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:48.214 [158/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:48.214 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:48.214 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:48.214 [161/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:48.214 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:48.480 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:48.480 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:48.480 [165/743] Generating lib/rte_hash_def with a custom command 00:02:48.480 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:48.480 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:48.481 [168/743] Generating lib/rte_timer_def with a custom command 00:02:48.481 [169/743] Generating lib/rte_timer_mingw with a custom command 00:02:48.481 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:48.481 [171/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:48.481 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:48.481 [173/743] Linking static target lib/librte_cmdline.a 00:02:49.047 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:49.047 [175/743] Linking static target lib/librte_metrics.a 00:02:49.047 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:49.047 [177/743] Linking static target lib/librte_timer.a 00:02:49.304 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.304 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.561 [180/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:49.561 [181/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:49.561 [182/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:49.561 [183/743] Linking static target lib/librte_ethdev.a 00:02:49.561 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.126 [185/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:50.126 [186/743] Generating lib/rte_acl_def with a custom command 00:02:50.126 [187/743] Generating lib/rte_acl_mingw with a custom command 00:02:50.126 [188/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:50.126 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:50.126 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:50.126 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:50.126 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:50.126 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:50.692 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:50.950 [195/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:50.950 [196/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:50.950 [197/743] Linking static target lib/librte_bitratestats.a 00:02:50.950 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.207 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:51.207 [200/743] Linking static target lib/librte_bbdev.a 00:02:51.207 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:51.463 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:51.463 [203/743] Linking static target lib/librte_hash.a 00:02:51.721 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:51.721 [205/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.721 [206/743] 
Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:51.721 [207/743] Linking static target lib/acl/libavx512_tmp.a 00:02:51.721 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:51.721 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:51.978 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.978 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:52.236 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:52.236 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:52.236 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:02:52.236 [215/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:52.236 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:52.495 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:52.495 [218/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:52.495 [219/743] Linking static target lib/librte_cfgfile.a 00:02:52.495 [220/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:52.495 [221/743] Linking static target lib/librte_acl.a 00:02:52.495 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:52.495 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:52.495 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:52.753 [225/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.753 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.753 [227/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.753 [228/743] Linking target lib/librte_eal.so.23.0 00:02:52.753 [229/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:53.011 [230/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:53.011 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:02:53.011 [232/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:53.011 [233/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:53.011 [234/743] Linking target lib/librte_ring.so.23.0 00:02:53.011 [235/743] Linking target lib/librte_meter.so.23.0 00:02:53.011 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:53.011 [237/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:53.011 [238/743] Linking target lib/librte_pci.so.23.0 00:02:53.011 [239/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:53.011 [240/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:53.011 [241/743] Linking target lib/librte_rcu.so.23.0 00:02:53.269 [242/743] Linking target lib/librte_mempool.so.23.0 00:02:53.269 [243/743] Linking target lib/librte_timer.so.23.0 00:02:53.269 [244/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:53.269 [245/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:53.269 [246/743] Linking static target lib/librte_bpf.a 00:02:53.269 [247/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:53.269 [248/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:53.269 [249/743] Linking 
target lib/librte_acl.so.23.0 00:02:53.269 [250/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:53.269 [251/743] Linking static target lib/librte_compressdev.a 00:02:53.269 [252/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:53.269 [253/743] Linking target lib/librte_cfgfile.so.23.0 00:02:53.269 [254/743] Linking target lib/librte_mbuf.so.23.0 00:02:53.269 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:53.528 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:53.528 [257/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:53.528 [258/743] Generating lib/rte_distributor_mingw with a custom command 00:02:53.528 [259/743] Linking target lib/librte_bbdev.so.23.0 00:02:53.528 [260/743] Linking target lib/librte_net.so.23.0 00:02:53.528 [261/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.528 [262/743] Generating lib/rte_efd_def with a custom command 00:02:53.528 [263/743] Generating lib/rte_efd_mingw with a custom command 00:02:53.528 [264/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:53.528 [265/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:53.786 [266/743] Linking target lib/librte_cmdline.so.23.0 00:02:53.786 [267/743] Linking target lib/librte_hash.so.23.0 00:02:53.786 [268/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:53.786 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:54.044 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:54.044 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:54.044 [272/743] Linking static target lib/librte_distributor.a 00:02:54.044 [273/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.302 [274/743] Linking target lib/librte_compressdev.so.23.0 00:02:54.302 [275/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.302 [276/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:54.302 [277/743] Linking target lib/librte_ethdev.so.23.0 00:02:54.302 [278/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.302 [279/743] Linking target lib/librte_distributor.so.23.0 00:02:54.302 [280/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:54.560 [281/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:54.560 [282/743] Linking target lib/librte_metrics.so.23.0 00:02:54.560 [283/743] Linking target lib/librte_bpf.so.23.0 00:02:54.560 [284/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:54.560 [285/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:54.560 [286/743] Linking target lib/librte_bitratestats.so.23.0 00:02:54.560 [287/743] Generating lib/rte_eventdev_def with a custom command 00:02:54.560 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:54.560 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:54.819 [290/743] Generating lib/rte_gpudev_mingw with a custom command 
00:02:54.819 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:55.078 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:55.078 [293/743] Linking static target lib/librte_efd.a 00:02:55.336 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:55.336 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:55.336 [296/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.336 [297/743] Linking static target lib/librte_cryptodev.a 00:02:55.336 [298/743] Linking target lib/librte_efd.so.23.0 00:02:55.595 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:55.595 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:55.595 [301/743] Generating lib/rte_gro_def with a custom command 00:02:55.595 [302/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:55.595 [303/743] Linking static target lib/librte_gpudev.a 00:02:55.595 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:55.595 [305/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:55.595 [306/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:55.852 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:56.111 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:56.369 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:56.369 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:56.369 [311/743] Generating lib/rte_gso_def with a custom command 00:02:56.369 [312/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:56.369 [313/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:56.369 [314/743] Generating lib/rte_gso_mingw with a custom command 00:02:56.369 [315/743] Linking static target lib/librte_gro.a 00:02:56.369 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.369 [317/743] Linking target lib/librte_gpudev.so.23.0 00:02:56.628 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:56.628 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:56.628 [320/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.628 [321/743] Linking target lib/librte_gro.so.23.0 00:02:56.886 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:56.886 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:56.886 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:56.886 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:56.886 [326/743] Linking static target lib/librte_eventdev.a 00:02:56.886 [327/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:56.886 [328/743] Linking static target lib/librte_gso.a 00:02:56.886 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:56.886 [330/743] Linking static target lib/librte_jobstats.a 00:02:56.886 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:57.144 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:57.144 [333/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:57.144 [334/743] Generating lib/gso.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:57.144 [335/743] Linking target lib/librte_gso.so.23.0 00:02:57.144 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:57.144 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:57.144 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:57.144 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:57.402 [340/743] Generating lib/rte_lpm_def with a custom command 00:02:57.402 [341/743] Generating lib/rte_lpm_mingw with a custom command 00:02:57.402 [342/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:57.402 [343/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.402 [344/743] Linking target lib/librte_jobstats.so.23.0 00:02:57.402 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:57.402 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:57.402 [347/743] Linking static target lib/librte_ip_frag.a 00:02:57.402 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.660 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:57.660 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:57.660 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.919 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:57.919 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:57.919 [354/743] Linking static target lib/librte_latencystats.a 00:02:57.919 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:57.919 [356/743] Generating lib/rte_member_def with a custom command 00:02:57.919 [357/743] Generating lib/rte_member_mingw with a custom command 00:02:58.177 [358/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:58.177 [359/743] Generating lib/rte_pcapng_def with a custom command 00:02:58.177 [360/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:58.177 [361/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:58.177 [362/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:58.177 [363/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:58.177 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.177 [365/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:58.177 [366/743] Linking target lib/librte_latencystats.so.23.0 00:02:58.177 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:58.435 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:58.435 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:58.435 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:58.693 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:58.693 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:58.693 [373/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:58.693 [374/743] Linking static target lib/librte_lpm.a 00:02:58.693 [375/743] Generating 
lib/rte_power_def with a custom command 00:02:58.693 [376/743] Generating lib/rte_power_mingw with a custom command 00:02:58.693 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.952 [378/743] Linking target lib/librte_eventdev.so.23.0 00:02:58.952 [379/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:58.952 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:58.952 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:58.952 [382/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:58.952 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:58.952 [384/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:58.952 [385/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:58.952 [386/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:58.952 [387/743] Linking static target lib/librte_pcapng.a 00:02:58.952 [388/743] Generating lib/rte_dmadev_def with a custom command 00:02:59.210 [389/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.210 [390/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:59.210 [391/743] Linking target lib/librte_lpm.so.23.0 00:02:59.210 [392/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:59.210 [393/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:59.211 [394/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:59.211 [395/743] Linking static target lib/librte_rawdev.a 00:02:59.211 [396/743] Generating lib/rte_rib_def with a custom command 00:02:59.211 [397/743] Generating lib/rte_rib_mingw with a custom command 00:02:59.211 [398/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:59.211 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:59.211 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:59.468 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.468 [402/743] Linking target lib/librte_pcapng.so.23.0 00:02:59.468 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:59.468 [404/743] Linking static target lib/librte_dmadev.a 00:02:59.468 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:59.468 [406/743] Linking static target lib/librte_power.a 00:02:59.468 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:59.757 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.757 [409/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:59.757 [410/743] Linking target lib/librte_rawdev.so.23.0 00:02:59.757 [411/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:59.757 [412/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:59.757 [413/743] Linking static target lib/librte_regexdev.a 00:02:59.757 [414/743] Generating lib/rte_sched_def with a custom command 00:02:59.757 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:59.757 [416/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:59.757 [417/743] Generating lib/rte_sched_mingw with a custom command 00:02:59.757 [418/743] Linking 
static target lib/librte_member.a 00:02:59.757 [419/743] Generating lib/rte_security_def with a custom command 00:03:00.039 [420/743] Generating lib/rte_security_mingw with a custom command 00:03:00.039 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:00.039 [422/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:00.039 [423/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.039 [424/743] Linking target lib/librte_dmadev.so.23.0 00:03:00.039 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:00.039 [426/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:00.039 [427/743] Linking static target lib/librte_reorder.a 00:03:00.039 [428/743] Generating lib/rte_stack_def with a custom command 00:03:00.039 [429/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:00.039 [430/743] Linking static target lib/librte_stack.a 00:03:00.039 [431/743] Generating lib/rte_stack_mingw with a custom command 00:03:00.039 [432/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.299 [433/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:00.299 [434/743] Linking target lib/librte_member.so.23.0 00:03:00.299 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:00.299 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.299 [437/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.299 [438/743] Linking target lib/librte_stack.so.23.0 00:03:00.299 [439/743] Linking target lib/librte_reorder.so.23.0 00:03:00.299 [440/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:00.299 [441/743] Linking static target lib/librte_rib.a 00:03:00.559 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.559 [443/743] Linking target lib/librte_power.so.23.0 00:03:00.559 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.559 [445/743] Linking target lib/librte_regexdev.so.23.0 00:03:00.817 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:00.817 [447/743] Linking static target lib/librte_security.a 00:03:00.817 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.817 [449/743] Linking target lib/librte_rib.so.23.0 00:03:01.075 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:01.075 [451/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:01.075 [452/743] Generating lib/rte_vhost_def with a custom command 00:03:01.075 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:03:01.075 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:01.075 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.075 [456/743] Linking target lib/librte_security.so.23.0 00:03:01.075 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:01.334 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:01.334 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:01.334 [460/743] Linking static target lib/librte_sched.a 00:03:01.900 [461/743] 
Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.900 [462/743] Linking target lib/librte_sched.so.23.0 00:03:01.900 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:01.900 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:01.900 [465/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:01.900 [466/743] Generating lib/rte_ipsec_def with a custom command 00:03:01.900 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:03:01.900 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:02.159 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:02.159 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:02.159 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:02.418 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:02.685 [473/743] Generating lib/rte_fib_def with a custom command 00:03:02.686 [474/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:02.686 [475/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:02.686 [476/743] Generating lib/rte_fib_mingw with a custom command 00:03:02.686 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:02.686 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:02.686 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:02.945 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:02.945 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:02.945 [482/743] Linking static target lib/librte_ipsec.a 00:03:03.203 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.203 [484/743] Linking target lib/librte_ipsec.so.23.0 00:03:03.461 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:03.461 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:03.461 [487/743] Linking static target lib/librte_fib.a 00:03:03.461 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:03.461 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:03.720 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:03.720 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:03.720 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.720 [493/743] Linking target lib/librte_fib.so.23.0 00:03:03.979 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:04.544 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:04.544 [496/743] Generating lib/rte_port_def with a custom command 00:03:04.544 [497/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:04.544 [498/743] Generating lib/rte_port_mingw with a custom command 00:03:04.544 [499/743] Generating lib/rte_pdump_def with a custom command 00:03:04.544 [500/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:04.544 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:03:04.544 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:04.803 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:04.803 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:04.803 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:05.061 [506/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:05.061 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:05.061 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:05.061 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:05.061 [510/743] Linking static target lib/librte_port.a 00:03:05.627 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:05.627 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:05.627 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.627 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:05.627 [515/743] Linking target lib/librte_port.so.23.0 00:03:05.627 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:05.886 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:05.886 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:05.886 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:05.886 [520/743] Linking static target lib/librte_pdump.a 00:03:06.144 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.144 [522/743] Linking target lib/librte_pdump.so.23.0 00:03:06.403 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:06.403 [524/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:06.403 [525/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:06.403 [526/743] Generating lib/rte_table_def with a custom command 00:03:06.403 [527/743] Generating lib/rte_table_mingw with a custom command 00:03:06.660 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:06.918 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:06.918 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:06.918 [531/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:06.918 [532/743] Generating lib/rte_pipeline_def with a custom command 00:03:06.918 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:03:06.919 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:07.176 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:07.176 [536/743] Linking static target lib/librte_table.a 00:03:07.176 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:07.742 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:07.742 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:07.742 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.742 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:07.742 [542/743] Linking target lib/librte_table.so.23.0 00:03:07.742 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:08.000 [544/743] Generating lib/rte_graph_def with a custom command 00:03:08.000 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:03:08.000 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:08.258 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:08.258 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:08.517 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:08.517 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:08.517 [551/743] Linking static target lib/librte_graph.a 00:03:08.517 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:08.776 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:08.776 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:08.776 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:09.343 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:09.343 [557/743] Generating lib/rte_node_def with a custom command 00:03:09.343 [558/743] Generating lib/rte_node_mingw with a custom command 00:03:09.343 [559/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.343 [560/743] Linking target lib/librte_graph.so.23.0 00:03:09.343 [561/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:09.343 [562/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:09.343 [563/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:09.602 [564/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:09.602 [565/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:09.602 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:09.602 [567/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:09.602 [568/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:09.602 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:09.602 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:09.602 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:09.860 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:09.860 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:09.860 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:09.860 [575/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:09.860 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:09.860 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:09.860 [578/743] Linking static target lib/librte_node.a 00:03:09.860 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:09.860 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:09.860 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:10.119 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.119 [583/743] Linking target lib/librte_node.so.23.0 00:03:10.119 [584/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:10.119 [585/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.119 [586/743] Linking static target drivers/librte_bus_vdev.a 00:03:10.119 [587/743] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:10.119 [588/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:10.378 [589/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:10.378 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.378 [591/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.378 [592/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.378 [593/743] Linking static target drivers/librte_bus_pci.a 00:03:10.378 [594/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.378 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:10.636 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:10.894 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.894 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:10.894 [599/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:10.894 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:10.894 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:10.894 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:11.158 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:11.158 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:11.158 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:11.158 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.158 [607/743] Linking static target drivers/librte_mempool_ring.a 00:03:11.158 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.158 [609/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:11.439 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:11.727 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:11.986 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:12.244 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:12.244 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:12.503 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:12.762 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:12.762 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:13.021 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:13.280 [619/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:13.280 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:13.539 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:13.539 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:13.539 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:13.539 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:03:13.539 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:14.473 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:14.731 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:14.990 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:14.990 [629/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:14.990 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:14.990 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:14.990 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:14.990 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:15.248 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:15.507 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:15.507 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:15.765 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:15.765 [638/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:15.765 [639/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:16.023 [640/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:16.023 [641/743] Linking static target lib/librte_vhost.a 00:03:16.023 [642/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:16.281 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:16.281 [644/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:16.281 [645/743] Linking static target drivers/librte_net_i40e.a 00:03:16.281 [646/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:16.281 [647/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:16.539 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:16.540 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:16.797 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:16.797 [651/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.797 [652/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:17.056 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:17.056 [654/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:17.314 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:17.314 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:17.314 [657/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.314 [658/743] Linking target lib/librte_vhost.so.23.0 00:03:17.573 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:17.831 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:17.831 [661/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:17.831 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:17.831 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:18.089 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:18.089 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:18.089 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:18.089 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:18.089 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:18.347 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:18.605 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:18.863 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:18.863 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:18.863 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:19.430 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:19.430 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:19.689 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:19.689 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:19.947 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:19.947 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:20.205 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:20.205 [681/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:20.205 [682/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:20.463 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:20.463 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:20.463 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:20.721 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:20.721 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:20.721 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:20.980 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:20.980 [690/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:21.238 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:21.238 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:21.238 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:21.238 [694/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:21.804 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:21.804 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:21.804 [697/743] Compiling C 
object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:22.063 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:22.063 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:22.321 [700/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:22.321 [701/743] Linking static target lib/librte_pipeline.a 00:03:22.579 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:22.579 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:22.579 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:22.839 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:23.098 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:23.098 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:23.098 [708/743] Linking target app/dpdk-dumpcap 00:03:23.098 [709/743] Linking target app/dpdk-proc-info 00:03:23.098 [710/743] Linking target app/dpdk-pdump 00:03:23.356 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:23.356 [712/743] Linking target app/dpdk-test-acl 00:03:23.356 [713/743] Linking target app/dpdk-test-bbdev 00:03:23.356 [714/743] Linking target app/dpdk-test-cmdline 00:03:23.615 [715/743] Linking target app/dpdk-test-crypto-perf 00:03:23.615 [716/743] Linking target app/dpdk-test-compress-perf 00:03:23.615 [717/743] Linking target app/dpdk-test-eventdev 00:03:23.873 [718/743] Linking target app/dpdk-test-fib 00:03:23.873 [719/743] Linking target app/dpdk-test-flow-perf 00:03:23.873 [720/743] Linking target app/dpdk-test-gpudev 00:03:23.873 [721/743] Linking target app/dpdk-test-pipeline 00:03:24.440 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:24.440 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:24.440 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:24.698 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:24.698 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:24.698 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:24.957 [728/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.957 [729/743] Linking target lib/librte_pipeline.so.23.0 00:03:25.216 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:25.216 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:25.474 [732/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:25.474 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:25.474 [734/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:25.474 [735/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:25.732 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:25.732 [737/743] Linking target app/dpdk-test-sad 00:03:25.990 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:25.990 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:25.990 [740/743] Linking target app/dpdk-test-regex 00:03:26.247 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:26.505 [742/743] Linking target app/dpdk-testpmd 00:03:26.505 [743/743] Linking target app/dpdk-test-security-perf 
00:03:26.505 18:12:24 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:26.764 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:26.764 [0/1] Installing files. 00:03:27.027 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.027 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.029 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.030 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:27.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:27.031 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.031 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.032 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.032 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.032 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.294 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.295 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.295 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.295 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.295 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.295 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
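(Editorial aside: the EAL headers staged just above — rte_eal.h, rte_lcore.h, rte_errno.h and friends under build/include — are what an out-of-tree consumer would compile against. As a hedged illustration only, not part of this build run, a minimal C program using those installed headers might look like the sketch below; the PKG_CONFIG_PATH hint assumes the libdpdk.pc file that this same run installs later under build/lib/pkgconfig.)

/* sketch.c - illustrative sketch against the freshly installed DPDK headers (not from the build log) */
/* build hint (assumption): cc sketch.c $(PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig pkg-config --cflags --libs libdpdk) */
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>      /* rte_eal_init / rte_eal_cleanup, installed above */
#include <rte_lcore.h>    /* rte_lcore_count, installed above */
#include <rte_debug.h>    /* rte_exit */

int main(int argc, char **argv)
{
        /* Initialize the Environment Abstraction Layer from the command line. */
        if (rte_eal_init(argc, argv) < 0)
                rte_exit(EXIT_FAILURE, "EAL init failed\n");

        /* Report how many lcores the EAL detected. */
        printf("EAL up, %u lcore(s) available\n", rte_lcore_count());

        /* Tear the EAL back down before exiting. */
        rte_eal_cleanup();
        return 0;
}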
00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
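(Editorial aside: with the rte_ring, rte_mempool and rte_mbuf headers now copied into build/include, the lock-free ring API is likewise available to the same out-of-tree consumer. A second hedged sketch follows, assuming the EAL has already been initialized as in the previous example; the ring name "demo_ring" and the 1024-slot size are arbitrary choices for illustration.)

/* ring_sketch.c fragment - illustrative use of the installed rte_ring.h (not from the build log) */
#include <stdio.h>
#include <stdlib.h>
#include <rte_ring.h>     /* rte_ring_create / enqueue / dequeue, installed above */
#include <rte_lcore.h>    /* rte_socket_id */
#include <rte_debug.h>    /* rte_exit */

static void ring_demo(void)
{
        int value = 42;
        void *obj = NULL;

        /* Single-producer / single-consumer ring with 1024 slots on the local NUMA socket. */
        struct rte_ring *r = rte_ring_create("demo_ring", 1024, rte_socket_id(),
                                             RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (r == NULL)
                rte_exit(EXIT_FAILURE, "ring creation failed\n");

        /* Pass a pointer through the ring and read it back. */
        if (rte_ring_enqueue(r, &value) == 0 && rte_ring_dequeue(r, &obj) == 0)
                printf("dequeued %d\n", *(int *)obj);

        rte_ring_free(r);
}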
00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.557 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:27.558 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:27.558 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:27.558 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:27.558 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:27.559 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:27.559 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:27.559 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:27.559 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:27.559 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:27.559 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:27.559 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:27.559 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:27.559 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:27.559 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:27.559 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:27.559 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:27.559 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:27.559 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:27.559 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:27.559 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:27.559 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:27.559 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:27.559 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:27.559 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:27.559 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:27.559 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:27.559 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:27.559 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:27.559 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:27.559 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:27.559 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:27.559 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:27.559 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:27.559 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:27.559 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:27.559 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:27.559 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:27.559 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:27.559 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:27.559 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:27.559 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:27.559 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:27.559 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:27.559 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:27.559 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:27.559 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:27.559 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:27.559 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:27.559 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:27.559 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:27.559 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:27.559 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:27.559 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:27.559 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:27.559 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:27.559 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:27.559 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:27.559 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:27.559 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:27.559 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:27.559 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:27.559 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:27.559 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:27.559 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:27.559 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:27.559 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:27.559 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:27.559 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:27.559 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:27.559 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:27.559 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:27.559 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:27.559 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:27.559 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:27.559 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:27.559 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:27.559 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:27.559 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:27.559 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:27.559 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
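The entries above show the layout the DPDK install step produces for PMD drivers: each driver library is copied into the dpdk/pmds-23.0 subdirectory as a fully versioned librte_*.so.23.0 file, and librte_*.so.23 plus librte_*.so symlinks are installed to point at it. Below is a minimal sketch of that versioned-symlink chain; the paths and the 23.0 ABI suffix come from the log, but the loop itself is illustrative and is not the actual symlink-drivers-solibs.sh script invoked later in this install.

    # Illustrative only: recreate the lib.so -> lib.so.23 -> lib.so.23.0 chain
    # for drivers already copied into the pmds-23.0 plugin directory.
    libdir=/home/vagrant/spdk_repo/dpdk/build/lib
    for real in "$libdir"/dpdk/pmds-23.0/librte_*.so.23.0; do
        soname=${real%.0}        # e.g. .../librte_bus_pci.so.23
        devlink=${real%.23.0}    # e.g. .../librte_bus_pci.so
        ln -sf "$(basename "$real")"   "$soname"   # .so.23 -> .so.23.0
        ln -sf "$(basename "$soname")" "$devlink"  # .so    -> .so.23
    done

Runtime consumers resolve the SONAME link (.so.23) while build-time linking uses the unversioned .so, which is why the log records both symlinks for every library.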
00:03:27.559 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:27.559 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:27.559 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:27.559 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:27.559 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:27.559 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:27.559 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:27.559 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:27.559 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:27.559 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:27.559 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:27.559 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:27.559 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:27.559 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:27.559 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:27.559 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:27.559 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:27.559 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:27.559 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:27.559 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:27.559 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:27.559 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:27.559 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:27.559 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:27.559 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:27.559 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:27.559 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:27.559 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:27.559 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:27.559 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:27.559 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:27.559 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:27.559 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:27.560 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:27.560 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:27.560 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:27.560 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:27.560 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:27.560 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:27.560 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:27.560 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:27.560 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:27.560 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:27.560 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:27.560 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:27.560 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:27.560 18:12:25 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:27.560 18:12:25 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:27.560 18:12:25 -- common/autobuild_common.sh@203 -- $ cat 00:03:27.560 18:12:25 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:27.560 00:03:27.560 real 0m51.519s 00:03:27.560 user 6m9.160s 00:03:27.560 sys 0m55.589s 00:03:27.560 18:12:25 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:27.560 ************************************ 00:03:27.560 END TEST build_native_dpdk 00:03:27.560 ************************************ 00:03:27.560 18:12:25 -- common/autotest_common.sh@10 -- $ set +x 00:03:27.560 18:12:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:27.560 18:12:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:27.560 18:12:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:27.560 18:12:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:27.560 18:12:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:27.560 18:12:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:27.560 18:12:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:27.560 18:12:25 
-- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:27.560 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:27.818 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.818 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:27.818 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:28.080 Using 'verbs' RDMA provider 00:03:41.274 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:56.156 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:56.156 Creating mk/config.mk...done. 00:03:56.156 Creating mk/cc.flags.mk...done. 00:03:56.156 Type 'make' to build. 00:03:56.156 18:12:52 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:56.156 18:12:52 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:56.156 18:12:52 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:56.156 18:12:52 -- common/autotest_common.sh@10 -- $ set +x 00:03:56.156 ************************************ 00:03:56.156 START TEST make 00:03:56.156 ************************************ 00:03:56.156 18:12:52 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:56.156 make[1]: Nothing to be done for 'all'. 00:04:18.098 CC lib/ut_mock/mock.o 00:04:18.098 CC lib/log/log.o 00:04:18.098 CC lib/log/log_flags.o 00:04:18.098 CC lib/log/log_deprecated.o 00:04:18.098 CC lib/ut/ut.o 00:04:18.098 LIB libspdk_ut_mock.a 00:04:18.098 LIB libspdk_log.a 00:04:18.098 SO libspdk_ut_mock.so.5.0 00:04:18.098 LIB libspdk_ut.a 00:04:18.098 SO libspdk_log.so.6.1 00:04:18.098 SO libspdk_ut.so.1.0 00:04:18.098 SYMLINK libspdk_ut_mock.so 00:04:18.098 SYMLINK libspdk_log.so 00:04:18.098 SYMLINK libspdk_ut.so 00:04:18.098 CXX lib/trace_parser/trace.o 00:04:18.098 CC lib/util/base64.o 00:04:18.098 CC lib/util/bit_array.o 00:04:18.098 CC lib/util/cpuset.o 00:04:18.098 CC lib/util/crc32.o 00:04:18.098 CC lib/util/crc32c.o 00:04:18.098 CC lib/dma/dma.o 00:04:18.098 CC lib/util/crc16.o 00:04:18.098 CC lib/ioat/ioat.o 00:04:18.098 CC lib/vfio_user/host/vfio_user_pci.o 00:04:18.098 CC lib/util/crc32_ieee.o 00:04:18.098 CC lib/util/crc64.o 00:04:18.098 CC lib/util/dif.o 00:04:18.098 CC lib/util/fd.o 00:04:18.098 LIB libspdk_dma.a 00:04:18.098 CC lib/util/file.o 00:04:18.098 SO libspdk_dma.so.3.0 00:04:18.098 CC lib/util/hexlify.o 00:04:18.098 SYMLINK libspdk_dma.so 00:04:18.098 CC lib/util/iov.o 00:04:18.098 CC lib/util/math.o 00:04:18.098 LIB libspdk_ioat.a 00:04:18.098 CC lib/util/pipe.o 00:04:18.098 CC lib/vfio_user/host/vfio_user.o 00:04:18.098 SO libspdk_ioat.so.6.0 00:04:18.098 CC lib/util/strerror_tls.o 00:04:18.098 CC lib/util/string.o 00:04:18.098 SYMLINK libspdk_ioat.so 00:04:18.098 CC lib/util/uuid.o 00:04:18.098 CC lib/util/fd_group.o 00:04:18.098 CC lib/util/xor.o 00:04:18.098 CC lib/util/zipf.o 00:04:18.098 LIB libspdk_vfio_user.a 00:04:18.098 SO libspdk_vfio_user.so.4.0 00:04:18.357 SYMLINK libspdk_vfio_user.so 00:04:18.357 LIB libspdk_util.a 00:04:18.357 SO libspdk_util.so.8.0 00:04:18.615 SYMLINK libspdk_util.so 00:04:18.615 CC lib/conf/conf.o 00:04:18.616 CC lib/rdma/common.o 00:04:18.616 CC lib/env_dpdk/env.o 00:04:18.616 LIB libspdk_trace_parser.a 00:04:18.616 CC 
lib/env_dpdk/memory.o 00:04:18.616 CC lib/env_dpdk/pci.o 00:04:18.616 CC lib/rdma/rdma_verbs.o 00:04:18.616 CC lib/idxd/idxd.o 00:04:18.616 CC lib/vmd/vmd.o 00:04:18.874 CC lib/json/json_parse.o 00:04:18.874 SO libspdk_trace_parser.so.4.0 00:04:18.874 SYMLINK libspdk_trace_parser.so 00:04:18.874 CC lib/json/json_util.o 00:04:18.874 LIB libspdk_conf.a 00:04:18.874 CC lib/env_dpdk/init.o 00:04:18.874 SO libspdk_conf.so.5.0 00:04:19.132 CC lib/vmd/led.o 00:04:19.132 LIB libspdk_rdma.a 00:04:19.132 SYMLINK libspdk_conf.so 00:04:19.132 CC lib/env_dpdk/threads.o 00:04:19.132 SO libspdk_rdma.so.5.0 00:04:19.132 CC lib/json/json_write.o 00:04:19.132 CC lib/env_dpdk/pci_ioat.o 00:04:19.132 SYMLINK libspdk_rdma.so 00:04:19.132 CC lib/idxd/idxd_user.o 00:04:19.132 CC lib/idxd/idxd_kernel.o 00:04:19.132 CC lib/env_dpdk/pci_virtio.o 00:04:19.132 CC lib/env_dpdk/pci_vmd.o 00:04:19.132 CC lib/env_dpdk/pci_idxd.o 00:04:19.390 CC lib/env_dpdk/pci_event.o 00:04:19.390 CC lib/env_dpdk/sigbus_handler.o 00:04:19.390 CC lib/env_dpdk/pci_dpdk.o 00:04:19.390 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.390 LIB libspdk_idxd.a 00:04:19.390 LIB libspdk_vmd.a 00:04:19.390 LIB libspdk_json.a 00:04:19.390 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.390 SO libspdk_vmd.so.5.0 00:04:19.390 SO libspdk_idxd.so.11.0 00:04:19.390 SO libspdk_json.so.5.1 00:04:19.390 SYMLINK libspdk_vmd.so 00:04:19.390 SYMLINK libspdk_json.so 00:04:19.390 SYMLINK libspdk_idxd.so 00:04:19.649 CC lib/jsonrpc/jsonrpc_server.o 00:04:19.649 CC lib/jsonrpc/jsonrpc_client.o 00:04:19.649 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:19.649 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:19.907 LIB libspdk_jsonrpc.a 00:04:19.907 SO libspdk_jsonrpc.so.5.1 00:04:20.165 SYMLINK libspdk_jsonrpc.so 00:04:20.165 LIB libspdk_env_dpdk.a 00:04:20.165 CC lib/rpc/rpc.o 00:04:20.424 SO libspdk_env_dpdk.so.13.0 00:04:20.424 LIB libspdk_rpc.a 00:04:20.424 SYMLINK libspdk_env_dpdk.so 00:04:20.424 SO libspdk_rpc.so.5.0 00:04:20.424 SYMLINK libspdk_rpc.so 00:04:20.684 CC lib/notify/notify.o 00:04:20.684 CC lib/notify/notify_rpc.o 00:04:20.684 CC lib/trace/trace.o 00:04:20.684 CC lib/trace/trace_rpc.o 00:04:20.684 CC lib/trace/trace_flags.o 00:04:20.684 CC lib/sock/sock.o 00:04:20.684 CC lib/sock/sock_rpc.o 00:04:20.943 LIB libspdk_notify.a 00:04:20.943 SO libspdk_notify.so.5.0 00:04:20.943 SYMLINK libspdk_notify.so 00:04:20.943 LIB libspdk_trace.a 00:04:20.943 SO libspdk_trace.so.9.0 00:04:21.201 SYMLINK libspdk_trace.so 00:04:21.201 LIB libspdk_sock.a 00:04:21.201 SO libspdk_sock.so.8.0 00:04:21.201 SYMLINK libspdk_sock.so 00:04:21.201 CC lib/thread/thread.o 00:04:21.201 CC lib/thread/iobuf.o 00:04:21.459 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:21.459 CC lib/nvme/nvme_ctrlr.o 00:04:21.459 CC lib/nvme/nvme_fabric.o 00:04:21.459 CC lib/nvme/nvme_ns_cmd.o 00:04:21.459 CC lib/nvme/nvme_ns.o 00:04:21.459 CC lib/nvme/nvme_pcie_common.o 00:04:21.459 CC lib/nvme/nvme_pcie.o 00:04:21.459 CC lib/nvme/nvme_qpair.o 00:04:21.717 CC lib/nvme/nvme.o 00:04:22.282 CC lib/nvme/nvme_quirks.o 00:04:22.282 CC lib/nvme/nvme_transport.o 00:04:22.282 CC lib/nvme/nvme_discovery.o 00:04:22.282 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:22.282 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:22.282 CC lib/nvme/nvme_tcp.o 00:04:22.542 CC lib/nvme/nvme_opal.o 00:04:22.542 CC lib/nvme/nvme_io_msg.o 00:04:22.808 CC lib/nvme/nvme_poll_group.o 00:04:22.808 LIB libspdk_thread.a 00:04:23.066 SO libspdk_thread.so.9.0 00:04:23.066 CC lib/nvme/nvme_zns.o 00:04:23.067 CC lib/nvme/nvme_cuse.o 00:04:23.067 CC lib/nvme/nvme_vfio_user.o 
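The compile phase above is driven through the run_test helper invoked earlier ("run_test make make -j10"), which brackets a timed command with START TEST / END TEST banners and a real/user/sys summary, as seen in the END TEST build_native_dpdk block. A rough sketch of that wrapper pattern follows; it assumes nothing about the real autotest_common.sh implementation beyond what the banners and timing output show.

    # Hedged sketch of a run_test-style wrapper, not SPDK's actual helper:
    # print banners, run the command under `time`, and propagate its status.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    run_test make make -j10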
00:04:23.067 SYMLINK libspdk_thread.so 00:04:23.067 CC lib/nvme/nvme_rdma.o 00:04:23.325 CC lib/accel/accel.o 00:04:23.325 CC lib/blob/blobstore.o 00:04:23.325 CC lib/blob/request.o 00:04:23.584 CC lib/blob/zeroes.o 00:04:23.584 CC lib/blob/blob_bs_dev.o 00:04:23.584 CC lib/accel/accel_rpc.o 00:04:23.843 CC lib/init/json_config.o 00:04:23.843 CC lib/virtio/virtio.o 00:04:23.843 CC lib/virtio/virtio_vhost_user.o 00:04:23.843 CC lib/virtio/virtio_vfio_user.o 00:04:23.843 CC lib/virtio/virtio_pci.o 00:04:23.843 CC lib/init/subsystem.o 00:04:24.102 CC lib/init/subsystem_rpc.o 00:04:24.102 CC lib/init/rpc.o 00:04:24.102 CC lib/accel/accel_sw.o 00:04:24.102 LIB libspdk_init.a 00:04:24.361 LIB libspdk_virtio.a 00:04:24.361 SO libspdk_init.so.4.0 00:04:24.361 SO libspdk_virtio.so.6.0 00:04:24.361 SYMLINK libspdk_init.so 00:04:24.361 SYMLINK libspdk_virtio.so 00:04:24.361 LIB libspdk_accel.a 00:04:24.361 SO libspdk_accel.so.14.0 00:04:24.621 CC lib/event/app.o 00:04:24.621 CC lib/event/reactor.o 00:04:24.621 CC lib/event/log_rpc.o 00:04:24.621 CC lib/event/scheduler_static.o 00:04:24.621 CC lib/event/app_rpc.o 00:04:24.621 LIB libspdk_nvme.a 00:04:24.621 SYMLINK libspdk_accel.so 00:04:24.621 CC lib/bdev/bdev.o 00:04:24.621 CC lib/bdev/bdev_rpc.o 00:04:24.621 CC lib/bdev/bdev_zone.o 00:04:24.621 SO libspdk_nvme.so.12.0 00:04:24.880 CC lib/bdev/part.o 00:04:24.880 CC lib/bdev/scsi_nvme.o 00:04:24.880 LIB libspdk_event.a 00:04:24.880 SYMLINK libspdk_nvme.so 00:04:25.139 SO libspdk_event.so.12.0 00:04:25.139 SYMLINK libspdk_event.so 00:04:26.516 LIB libspdk_blob.a 00:04:26.516 SO libspdk_blob.so.10.1 00:04:26.516 SYMLINK libspdk_blob.so 00:04:26.775 CC lib/lvol/lvol.o 00:04:26.775 CC lib/blobfs/tree.o 00:04:26.775 CC lib/blobfs/blobfs.o 00:04:27.712 LIB libspdk_bdev.a 00:04:27.712 LIB libspdk_blobfs.a 00:04:27.712 SO libspdk_bdev.so.14.0 00:04:27.712 SO libspdk_blobfs.so.9.0 00:04:27.712 LIB libspdk_lvol.a 00:04:27.712 SO libspdk_lvol.so.9.1 00:04:27.712 SYMLINK libspdk_blobfs.so 00:04:27.712 SYMLINK libspdk_bdev.so 00:04:27.712 SYMLINK libspdk_lvol.so 00:04:27.970 CC lib/scsi/dev.o 00:04:27.970 CC lib/scsi/lun.o 00:04:27.970 CC lib/scsi/port.o 00:04:27.970 CC lib/ublk/ublk.o 00:04:27.970 CC lib/scsi/scsi.o 00:04:27.970 CC lib/ublk/ublk_rpc.o 00:04:27.970 CC lib/scsi/scsi_bdev.o 00:04:27.970 CC lib/nvmf/ctrlr.o 00:04:27.970 CC lib/nbd/nbd.o 00:04:27.970 CC lib/ftl/ftl_core.o 00:04:27.970 CC lib/scsi/scsi_pr.o 00:04:28.228 CC lib/nbd/nbd_rpc.o 00:04:28.228 CC lib/scsi/scsi_rpc.o 00:04:28.228 CC lib/scsi/task.o 00:04:28.228 CC lib/nvmf/ctrlr_discovery.o 00:04:28.228 CC lib/nvmf/ctrlr_bdev.o 00:04:28.228 CC lib/nvmf/subsystem.o 00:04:28.487 LIB libspdk_nbd.a 00:04:28.487 SO libspdk_nbd.so.6.0 00:04:28.487 CC lib/ftl/ftl_init.o 00:04:28.487 CC lib/ftl/ftl_layout.o 00:04:28.487 CC lib/nvmf/nvmf.o 00:04:28.487 SYMLINK libspdk_nbd.so 00:04:28.487 CC lib/nvmf/nvmf_rpc.o 00:04:28.487 LIB libspdk_scsi.a 00:04:28.487 SO libspdk_scsi.so.8.0 00:04:28.745 CC lib/nvmf/transport.o 00:04:28.745 SYMLINK libspdk_scsi.so 00:04:28.745 CC lib/ftl/ftl_debug.o 00:04:28.745 CC lib/nvmf/tcp.o 00:04:28.745 LIB libspdk_ublk.a 00:04:28.745 SO libspdk_ublk.so.2.0 00:04:28.745 SYMLINK libspdk_ublk.so 00:04:28.745 CC lib/ftl/ftl_io.o 00:04:29.003 CC lib/iscsi/conn.o 00:04:29.003 CC lib/iscsi/init_grp.o 00:04:29.003 CC lib/nvmf/rdma.o 00:04:29.262 CC lib/ftl/ftl_sb.o 00:04:29.262 CC lib/iscsi/iscsi.o 00:04:29.262 CC lib/iscsi/md5.o 00:04:29.262 CC lib/ftl/ftl_l2p.o 00:04:29.520 CC lib/ftl/ftl_l2p_flat.o 00:04:29.520 CC 
lib/iscsi/param.o 00:04:29.520 CC lib/iscsi/portal_grp.o 00:04:29.520 CC lib/ftl/ftl_nv_cache.o 00:04:29.520 CC lib/iscsi/tgt_node.o 00:04:29.520 CC lib/iscsi/iscsi_subsystem.o 00:04:29.520 CC lib/iscsi/iscsi_rpc.o 00:04:29.520 CC lib/iscsi/task.o 00:04:29.779 CC lib/ftl/ftl_band.o 00:04:29.779 CC lib/ftl/ftl_band_ops.o 00:04:29.779 CC lib/vhost/vhost.o 00:04:30.037 CC lib/vhost/vhost_rpc.o 00:04:30.037 CC lib/ftl/ftl_writer.o 00:04:30.037 CC lib/ftl/ftl_rq.o 00:04:30.295 CC lib/ftl/ftl_reloc.o 00:04:30.295 CC lib/ftl/ftl_l2p_cache.o 00:04:30.295 CC lib/vhost/vhost_scsi.o 00:04:30.295 CC lib/vhost/vhost_blk.o 00:04:30.295 CC lib/vhost/rte_vhost_user.o 00:04:30.553 CC lib/ftl/ftl_p2l.o 00:04:30.553 CC lib/ftl/mngt/ftl_mngt.o 00:04:30.553 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:30.553 LIB libspdk_iscsi.a 00:04:30.811 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:30.811 SO libspdk_iscsi.so.7.0 00:04:30.811 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:30.811 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:30.811 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:30.811 SYMLINK libspdk_iscsi.so 00:04:30.811 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:30.811 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:30.811 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:31.069 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:31.069 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:31.069 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:31.069 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:31.069 LIB libspdk_nvmf.a 00:04:31.327 CC lib/ftl/utils/ftl_conf.o 00:04:31.327 SO libspdk_nvmf.so.17.0 00:04:31.327 CC lib/ftl/utils/ftl_md.o 00:04:31.327 CC lib/ftl/utils/ftl_mempool.o 00:04:31.327 CC lib/ftl/utils/ftl_bitmap.o 00:04:31.327 CC lib/ftl/utils/ftl_property.o 00:04:31.327 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:31.585 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:31.585 SYMLINK libspdk_nvmf.so 00:04:31.585 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:31.585 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:31.585 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:31.585 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:31.585 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:31.585 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:31.844 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:31.844 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:31.844 LIB libspdk_vhost.a 00:04:31.844 CC lib/ftl/base/ftl_base_dev.o 00:04:31.844 CC lib/ftl/base/ftl_base_bdev.o 00:04:31.844 CC lib/ftl/ftl_trace.o 00:04:31.844 SO libspdk_vhost.so.7.1 00:04:31.844 SYMLINK libspdk_vhost.so 00:04:32.103 LIB libspdk_ftl.a 00:04:32.361 SO libspdk_ftl.so.8.0 00:04:32.620 SYMLINK libspdk_ftl.so 00:04:32.879 CC module/env_dpdk/env_dpdk_rpc.o 00:04:32.879 CC module/scheduler/gscheduler/gscheduler.o 00:04:32.879 CC module/sock/uring/uring.o 00:04:32.879 CC module/sock/posix/posix.o 00:04:32.879 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:32.879 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:32.879 CC module/accel/error/accel_error.o 00:04:32.879 CC module/accel/dsa/accel_dsa.o 00:04:32.879 CC module/accel/ioat/accel_ioat.o 00:04:32.879 CC module/blob/bdev/blob_bdev.o 00:04:32.879 LIB libspdk_env_dpdk_rpc.a 00:04:32.879 SO libspdk_env_dpdk_rpc.so.5.0 00:04:32.879 LIB libspdk_scheduler_gscheduler.a 00:04:32.879 LIB libspdk_scheduler_dpdk_governor.a 00:04:32.879 SYMLINK libspdk_env_dpdk_rpc.so 00:04:32.879 SO libspdk_scheduler_gscheduler.so.3.0 00:04:32.879 CC module/accel/ioat/accel_ioat_rpc.o 00:04:32.879 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:33.137 LIB libspdk_scheduler_dynamic.a 00:04:33.137 CC module/accel/error/accel_error_rpc.o 00:04:33.137 CC 
module/accel/dsa/accel_dsa_rpc.o 00:04:33.137 SYMLINK libspdk_scheduler_gscheduler.so 00:04:33.137 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:33.137 SO libspdk_scheduler_dynamic.so.3.0 00:04:33.137 SYMLINK libspdk_scheduler_dynamic.so 00:04:33.137 LIB libspdk_blob_bdev.a 00:04:33.137 SO libspdk_blob_bdev.so.10.1 00:04:33.137 LIB libspdk_accel_ioat.a 00:04:33.137 CC module/accel/iaa/accel_iaa.o 00:04:33.137 CC module/accel/iaa/accel_iaa_rpc.o 00:04:33.137 SO libspdk_accel_ioat.so.5.0 00:04:33.137 LIB libspdk_accel_error.a 00:04:33.137 LIB libspdk_accel_dsa.a 00:04:33.137 SYMLINK libspdk_blob_bdev.so 00:04:33.137 SO libspdk_accel_error.so.1.0 00:04:33.137 SO libspdk_accel_dsa.so.4.0 00:04:33.137 SYMLINK libspdk_accel_ioat.so 00:04:33.396 SYMLINK libspdk_accel_error.so 00:04:33.396 SYMLINK libspdk_accel_dsa.so 00:04:33.396 LIB libspdk_accel_iaa.a 00:04:33.396 CC module/bdev/gpt/gpt.o 00:04:33.396 CC module/bdev/delay/vbdev_delay.o 00:04:33.396 CC module/bdev/error/vbdev_error.o 00:04:33.396 CC module/bdev/lvol/vbdev_lvol.o 00:04:33.396 SO libspdk_accel_iaa.so.2.0 00:04:33.396 CC module/blobfs/bdev/blobfs_bdev.o 00:04:33.396 CC module/bdev/malloc/bdev_malloc.o 00:04:33.396 CC module/bdev/null/bdev_null.o 00:04:33.396 SYMLINK libspdk_accel_iaa.so 00:04:33.396 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:33.655 LIB libspdk_sock_uring.a 00:04:33.655 SO libspdk_sock_uring.so.4.0 00:04:33.655 LIB libspdk_sock_posix.a 00:04:33.655 SO libspdk_sock_posix.so.5.0 00:04:33.655 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:33.655 CC module/bdev/gpt/vbdev_gpt.o 00:04:33.655 SYMLINK libspdk_sock_uring.so 00:04:33.655 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:33.655 CC module/bdev/error/vbdev_error_rpc.o 00:04:33.655 SYMLINK libspdk_sock_posix.so 00:04:33.655 CC module/bdev/null/bdev_null_rpc.o 00:04:33.913 CC module/bdev/nvme/bdev_nvme.o 00:04:33.914 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:33.914 LIB libspdk_blobfs_bdev.a 00:04:33.914 CC module/bdev/passthru/vbdev_passthru.o 00:04:33.914 LIB libspdk_bdev_malloc.a 00:04:33.914 SO libspdk_blobfs_bdev.so.5.0 00:04:33.914 LIB libspdk_bdev_error.a 00:04:33.914 SO libspdk_bdev_malloc.so.5.0 00:04:33.914 SO libspdk_bdev_error.so.5.0 00:04:33.914 LIB libspdk_bdev_gpt.a 00:04:33.914 SYMLINK libspdk_blobfs_bdev.so 00:04:33.914 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:33.914 CC module/bdev/nvme/nvme_rpc.o 00:04:33.914 LIB libspdk_bdev_null.a 00:04:33.914 SYMLINK libspdk_bdev_malloc.so 00:04:33.914 CC module/bdev/nvme/bdev_mdns_client.o 00:04:33.914 SO libspdk_bdev_gpt.so.5.0 00:04:33.914 SO libspdk_bdev_null.so.5.0 00:04:33.914 SYMLINK libspdk_bdev_error.so 00:04:33.914 LIB libspdk_bdev_lvol.a 00:04:33.914 LIB libspdk_bdev_delay.a 00:04:33.914 SO libspdk_bdev_lvol.so.5.0 00:04:33.914 SO libspdk_bdev_delay.so.5.0 00:04:34.172 SYMLINK libspdk_bdev_gpt.so 00:04:34.172 SYMLINK libspdk_bdev_null.so 00:04:34.172 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:34.172 SYMLINK libspdk_bdev_lvol.so 00:04:34.172 SYMLINK libspdk_bdev_delay.so 00:04:34.172 CC module/bdev/raid/bdev_raid.o 00:04:34.172 CC module/bdev/nvme/vbdev_opal.o 00:04:34.172 CC module/bdev/split/vbdev_split.o 00:04:34.172 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:34.172 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:34.172 CC module/bdev/uring/bdev_uring.o 00:04:34.172 LIB libspdk_bdev_passthru.a 00:04:34.172 SO libspdk_bdev_passthru.so.5.0 00:04:34.172 CC module/bdev/aio/bdev_aio.o 00:04:34.433 SYMLINK libspdk_bdev_passthru.so 00:04:34.433 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 
00:04:34.433 CC module/bdev/aio/bdev_aio_rpc.o 00:04:34.433 CC module/bdev/split/vbdev_split_rpc.o 00:04:34.433 CC module/bdev/raid/bdev_raid_rpc.o 00:04:34.433 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:34.433 CC module/bdev/raid/bdev_raid_sb.o 00:04:34.433 LIB libspdk_bdev_zone_block.a 00:04:34.697 CC module/bdev/uring/bdev_uring_rpc.o 00:04:34.697 CC module/bdev/raid/raid0.o 00:04:34.697 LIB libspdk_bdev_split.a 00:04:34.697 SO libspdk_bdev_zone_block.so.5.0 00:04:34.697 LIB libspdk_bdev_aio.a 00:04:34.697 SO libspdk_bdev_split.so.5.0 00:04:34.697 SO libspdk_bdev_aio.so.5.0 00:04:34.697 CC module/bdev/raid/raid1.o 00:04:34.697 SYMLINK libspdk_bdev_split.so 00:04:34.697 SYMLINK libspdk_bdev_zone_block.so 00:04:34.697 SYMLINK libspdk_bdev_aio.so 00:04:34.697 CC module/bdev/raid/concat.o 00:04:34.697 LIB libspdk_bdev_uring.a 00:04:34.697 SO libspdk_bdev_uring.so.5.0 00:04:34.954 CC module/bdev/ftl/bdev_ftl.o 00:04:34.954 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:34.954 CC module/bdev/iscsi/bdev_iscsi.o 00:04:34.954 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:34.954 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:34.954 SYMLINK libspdk_bdev_uring.so 00:04:34.954 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:34.954 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:34.954 LIB libspdk_bdev_raid.a 00:04:34.954 SO libspdk_bdev_raid.so.5.0 00:04:35.212 LIB libspdk_bdev_ftl.a 00:04:35.212 SYMLINK libspdk_bdev_raid.so 00:04:35.212 SO libspdk_bdev_ftl.so.5.0 00:04:35.212 LIB libspdk_bdev_iscsi.a 00:04:35.212 SYMLINK libspdk_bdev_ftl.so 00:04:35.212 SO libspdk_bdev_iscsi.so.5.0 00:04:35.212 SYMLINK libspdk_bdev_iscsi.so 00:04:35.471 LIB libspdk_bdev_virtio.a 00:04:35.471 SO libspdk_bdev_virtio.so.5.0 00:04:35.471 SYMLINK libspdk_bdev_virtio.so 00:04:36.408 LIB libspdk_bdev_nvme.a 00:04:36.408 SO libspdk_bdev_nvme.so.6.0 00:04:36.408 SYMLINK libspdk_bdev_nvme.so 00:04:36.667 CC module/event/subsystems/scheduler/scheduler.o 00:04:36.667 CC module/event/subsystems/vmd/vmd.o 00:04:36.667 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:36.667 CC module/event/subsystems/sock/sock.o 00:04:36.667 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:36.667 CC module/event/subsystems/iobuf/iobuf.o 00:04:36.667 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:36.926 LIB libspdk_event_scheduler.a 00:04:36.926 LIB libspdk_event_sock.a 00:04:36.926 LIB libspdk_event_iobuf.a 00:04:36.926 SO libspdk_event_sock.so.4.0 00:04:36.926 SO libspdk_event_scheduler.so.3.0 00:04:36.926 LIB libspdk_event_vhost_blk.a 00:04:36.926 LIB libspdk_event_vmd.a 00:04:36.926 SO libspdk_event_iobuf.so.2.0 00:04:36.926 SO libspdk_event_vhost_blk.so.2.0 00:04:36.926 SO libspdk_event_vmd.so.5.0 00:04:36.926 SYMLINK libspdk_event_sock.so 00:04:36.926 SYMLINK libspdk_event_scheduler.so 00:04:36.926 SYMLINK libspdk_event_iobuf.so 00:04:36.926 SYMLINK libspdk_event_vhost_blk.so 00:04:36.926 SYMLINK libspdk_event_vmd.so 00:04:37.185 CC module/event/subsystems/accel/accel.o 00:04:37.444 LIB libspdk_event_accel.a 00:04:37.444 SO libspdk_event_accel.so.5.0 00:04:37.444 SYMLINK libspdk_event_accel.so 00:04:37.704 CC module/event/subsystems/bdev/bdev.o 00:04:37.704 LIB libspdk_event_bdev.a 00:04:37.704 SO libspdk_event_bdev.so.5.0 00:04:37.962 SYMLINK libspdk_event_bdev.so 00:04:37.962 CC module/event/subsystems/scsi/scsi.o 00:04:37.962 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:37.962 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:37.962 CC module/event/subsystems/nbd/nbd.o 00:04:37.962 CC module/event/subsystems/ublk/ublk.o 00:04:38.221 LIB 
libspdk_event_nbd.a 00:04:38.222 LIB libspdk_event_ublk.a 00:04:38.222 LIB libspdk_event_scsi.a 00:04:38.222 SO libspdk_event_nbd.so.5.0 00:04:38.222 SO libspdk_event_ublk.so.2.0 00:04:38.222 SO libspdk_event_scsi.so.5.0 00:04:38.222 SYMLINK libspdk_event_ublk.so 00:04:38.222 SYMLINK libspdk_event_nbd.so 00:04:38.222 LIB libspdk_event_nvmf.a 00:04:38.222 SYMLINK libspdk_event_scsi.so 00:04:38.481 SO libspdk_event_nvmf.so.5.0 00:04:38.481 SYMLINK libspdk_event_nvmf.so 00:04:38.481 CC module/event/subsystems/iscsi/iscsi.o 00:04:38.481 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:38.740 LIB libspdk_event_vhost_scsi.a 00:04:38.740 LIB libspdk_event_iscsi.a 00:04:38.740 SO libspdk_event_vhost_scsi.so.2.0 00:04:38.740 SO libspdk_event_iscsi.so.5.0 00:04:38.740 SYMLINK libspdk_event_vhost_scsi.so 00:04:38.740 SYMLINK libspdk_event_iscsi.so 00:04:38.999 SO libspdk.so.5.0 00:04:38.999 SYMLINK libspdk.so 00:04:38.999 CC app/trace_record/trace_record.o 00:04:38.999 CXX app/trace/trace.o 00:04:38.999 CC app/iscsi_tgt/iscsi_tgt.o 00:04:38.999 CC app/nvmf_tgt/nvmf_main.o 00:04:39.258 CC examples/accel/perf/accel_perf.o 00:04:39.258 CC test/bdev/bdevio/bdevio.o 00:04:39.258 CC examples/bdev/hello_world/hello_bdev.o 00:04:39.258 CC test/app/bdev_svc/bdev_svc.o 00:04:39.258 CC test/blobfs/mkfs/mkfs.o 00:04:39.258 CC test/accel/dif/dif.o 00:04:39.258 LINK spdk_trace_record 00:04:39.516 LINK iscsi_tgt 00:04:39.516 LINK nvmf_tgt 00:04:39.516 LINK bdev_svc 00:04:39.516 LINK hello_bdev 00:04:39.516 LINK spdk_trace 00:04:39.516 LINK mkfs 00:04:39.516 LINK bdevio 00:04:39.775 LINK accel_perf 00:04:39.775 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:39.775 CC test/app/histogram_perf/histogram_perf.o 00:04:39.775 CC test/app/jsoncat/jsoncat.o 00:04:39.775 LINK dif 00:04:39.775 CC test/app/stub/stub.o 00:04:39.775 TEST_HEADER include/spdk/accel.h 00:04:39.775 TEST_HEADER include/spdk/accel_module.h 00:04:39.775 TEST_HEADER include/spdk/assert.h 00:04:39.775 TEST_HEADER include/spdk/barrier.h 00:04:39.775 TEST_HEADER include/spdk/base64.h 00:04:39.775 CC examples/bdev/bdevperf/bdevperf.o 00:04:39.775 TEST_HEADER include/spdk/bdev.h 00:04:39.775 TEST_HEADER include/spdk/bdev_module.h 00:04:39.775 TEST_HEADER include/spdk/bdev_zone.h 00:04:39.775 TEST_HEADER include/spdk/bit_array.h 00:04:39.775 TEST_HEADER include/spdk/bit_pool.h 00:04:39.775 TEST_HEADER include/spdk/blob_bdev.h 00:04:39.775 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:39.775 TEST_HEADER include/spdk/blobfs.h 00:04:39.775 TEST_HEADER include/spdk/blob.h 00:04:39.775 TEST_HEADER include/spdk/conf.h 00:04:39.775 TEST_HEADER include/spdk/config.h 00:04:39.775 TEST_HEADER include/spdk/cpuset.h 00:04:39.775 TEST_HEADER include/spdk/crc16.h 00:04:39.775 TEST_HEADER include/spdk/crc32.h 00:04:39.775 TEST_HEADER include/spdk/crc64.h 00:04:39.775 TEST_HEADER include/spdk/dif.h 00:04:39.775 LINK histogram_perf 00:04:39.775 TEST_HEADER include/spdk/dma.h 00:04:39.775 TEST_HEADER include/spdk/endian.h 00:04:39.775 CC app/spdk_tgt/spdk_tgt.o 00:04:39.775 TEST_HEADER include/spdk/env_dpdk.h 00:04:39.775 TEST_HEADER include/spdk/env.h 00:04:39.775 TEST_HEADER include/spdk/event.h 00:04:39.775 TEST_HEADER include/spdk/fd_group.h 00:04:39.775 LINK jsoncat 00:04:39.775 TEST_HEADER include/spdk/fd.h 00:04:39.775 TEST_HEADER include/spdk/file.h 00:04:39.775 TEST_HEADER include/spdk/ftl.h 00:04:39.775 TEST_HEADER include/spdk/gpt_spec.h 00:04:39.775 TEST_HEADER include/spdk/hexlify.h 00:04:39.775 TEST_HEADER include/spdk/histogram_data.h 00:04:39.775 
TEST_HEADER include/spdk/idxd.h 00:04:39.775 TEST_HEADER include/spdk/idxd_spec.h 00:04:39.775 TEST_HEADER include/spdk/init.h 00:04:39.775 TEST_HEADER include/spdk/ioat.h 00:04:39.775 TEST_HEADER include/spdk/ioat_spec.h 00:04:39.775 TEST_HEADER include/spdk/iscsi_spec.h 00:04:39.775 TEST_HEADER include/spdk/json.h 00:04:39.775 TEST_HEADER include/spdk/jsonrpc.h 00:04:39.775 TEST_HEADER include/spdk/likely.h 00:04:40.033 TEST_HEADER include/spdk/log.h 00:04:40.033 TEST_HEADER include/spdk/lvol.h 00:04:40.033 TEST_HEADER include/spdk/memory.h 00:04:40.033 TEST_HEADER include/spdk/mmio.h 00:04:40.033 TEST_HEADER include/spdk/nbd.h 00:04:40.033 TEST_HEADER include/spdk/notify.h 00:04:40.033 TEST_HEADER include/spdk/nvme.h 00:04:40.033 TEST_HEADER include/spdk/nvme_intel.h 00:04:40.033 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:40.033 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:40.033 TEST_HEADER include/spdk/nvme_spec.h 00:04:40.033 TEST_HEADER include/spdk/nvme_zns.h 00:04:40.033 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:40.033 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:40.033 TEST_HEADER include/spdk/nvmf.h 00:04:40.033 TEST_HEADER include/spdk/nvmf_spec.h 00:04:40.033 TEST_HEADER include/spdk/nvmf_transport.h 00:04:40.033 LINK stub 00:04:40.033 TEST_HEADER include/spdk/opal.h 00:04:40.033 TEST_HEADER include/spdk/opal_spec.h 00:04:40.033 TEST_HEADER include/spdk/pci_ids.h 00:04:40.033 TEST_HEADER include/spdk/pipe.h 00:04:40.033 TEST_HEADER include/spdk/queue.h 00:04:40.033 TEST_HEADER include/spdk/reduce.h 00:04:40.033 TEST_HEADER include/spdk/rpc.h 00:04:40.033 TEST_HEADER include/spdk/scheduler.h 00:04:40.033 TEST_HEADER include/spdk/scsi.h 00:04:40.033 TEST_HEADER include/spdk/scsi_spec.h 00:04:40.033 TEST_HEADER include/spdk/sock.h 00:04:40.033 TEST_HEADER include/spdk/stdinc.h 00:04:40.033 TEST_HEADER include/spdk/string.h 00:04:40.033 TEST_HEADER include/spdk/thread.h 00:04:40.033 TEST_HEADER include/spdk/trace.h 00:04:40.033 TEST_HEADER include/spdk/trace_parser.h 00:04:40.033 CC test/dma/test_dma/test_dma.o 00:04:40.033 TEST_HEADER include/spdk/tree.h 00:04:40.033 TEST_HEADER include/spdk/ublk.h 00:04:40.033 TEST_HEADER include/spdk/util.h 00:04:40.033 TEST_HEADER include/spdk/uuid.h 00:04:40.033 TEST_HEADER include/spdk/version.h 00:04:40.033 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:40.033 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:40.033 TEST_HEADER include/spdk/vhost.h 00:04:40.033 TEST_HEADER include/spdk/vmd.h 00:04:40.033 TEST_HEADER include/spdk/xor.h 00:04:40.033 TEST_HEADER include/spdk/zipf.h 00:04:40.033 CXX test/cpp_headers/accel.o 00:04:40.033 CC test/env/mem_callbacks/mem_callbacks.o 00:04:40.033 CC test/event/event_perf/event_perf.o 00:04:40.033 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:40.033 LINK spdk_tgt 00:04:40.033 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:40.033 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:40.033 LINK nvme_fuzz 00:04:40.291 CXX test/cpp_headers/accel_module.o 00:04:40.291 LINK event_perf 00:04:40.291 LINK mem_callbacks 00:04:40.291 CC test/env/vtophys/vtophys.o 00:04:40.291 LINK test_dma 00:04:40.291 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:40.291 CC app/spdk_lspci/spdk_lspci.o 00:04:40.291 CXX test/cpp_headers/assert.o 00:04:40.549 CC test/env/memory/memory_ut.o 00:04:40.549 CC test/event/reactor/reactor.o 00:04:40.549 LINK vtophys 00:04:40.549 LINK spdk_lspci 00:04:40.549 LINK env_dpdk_post_init 00:04:40.549 CXX test/cpp_headers/barrier.o 00:04:40.549 LINK vhost_fuzz 00:04:40.549 LINK 
bdevperf 00:04:40.549 LINK reactor 00:04:40.549 CC test/env/pci/pci_ut.o 00:04:40.808 CXX test/cpp_headers/base64.o 00:04:40.808 CC app/spdk_nvme_perf/perf.o 00:04:40.808 CC examples/blob/hello_world/hello_blob.o 00:04:40.808 CC examples/blob/cli/blobcli.o 00:04:40.808 CC test/event/reactor_perf/reactor_perf.o 00:04:40.808 CXX test/cpp_headers/bdev.o 00:04:41.065 CC app/spdk_nvme_identify/identify.o 00:04:41.065 CC test/lvol/esnap/esnap.o 00:04:41.065 LINK memory_ut 00:04:41.065 LINK reactor_perf 00:04:41.065 LINK pci_ut 00:04:41.065 LINK hello_blob 00:04:41.065 CXX test/cpp_headers/bdev_module.o 00:04:41.323 CC app/spdk_nvme_discover/discovery_aer.o 00:04:41.323 CC test/event/app_repeat/app_repeat.o 00:04:41.323 CXX test/cpp_headers/bdev_zone.o 00:04:41.323 CC app/spdk_top/spdk_top.o 00:04:41.323 LINK blobcli 00:04:41.323 LINK spdk_nvme_discover 00:04:41.581 CC app/vhost/vhost.o 00:04:41.581 LINK app_repeat 00:04:41.581 CXX test/cpp_headers/bit_array.o 00:04:41.581 LINK vhost 00:04:41.581 LINK spdk_nvme_perf 00:04:41.839 CC examples/ioat/perf/perf.o 00:04:41.839 CXX test/cpp_headers/bit_pool.o 00:04:41.839 CC app/spdk_dd/spdk_dd.o 00:04:41.839 LINK spdk_nvme_identify 00:04:41.839 CC test/event/scheduler/scheduler.o 00:04:41.839 LINK iscsi_fuzz 00:04:41.839 CXX test/cpp_headers/blob_bdev.o 00:04:41.839 CC examples/ioat/verify/verify.o 00:04:42.097 LINK ioat_perf 00:04:42.097 CC app/fio/nvme/fio_plugin.o 00:04:42.097 LINK scheduler 00:04:42.097 CC test/nvme/aer/aer.o 00:04:42.097 CC test/nvme/reset/reset.o 00:04:42.097 LINK spdk_dd 00:04:42.097 CXX test/cpp_headers/blobfs_bdev.o 00:04:42.097 CXX test/cpp_headers/blobfs.o 00:04:42.097 LINK verify 00:04:42.356 LINK spdk_top 00:04:42.356 CC test/nvme/sgl/sgl.o 00:04:42.356 CXX test/cpp_headers/blob.o 00:04:42.356 CXX test/cpp_headers/conf.o 00:04:42.356 CC test/nvme/e2edp/nvme_dp.o 00:04:42.356 LINK aer 00:04:42.356 LINK reset 00:04:42.356 CXX test/cpp_headers/config.o 00:04:42.356 CC examples/nvme/hello_world/hello_world.o 00:04:42.615 CXX test/cpp_headers/cpuset.o 00:04:42.615 CC test/rpc_client/rpc_client_test.o 00:04:42.615 CXX test/cpp_headers/crc16.o 00:04:42.615 LINK sgl 00:04:42.615 CXX test/cpp_headers/crc32.o 00:04:42.615 LINK spdk_nvme 00:04:42.615 CC test/thread/poller_perf/poller_perf.o 00:04:42.615 LINK nvme_dp 00:04:42.615 LINK rpc_client_test 00:04:42.615 LINK hello_world 00:04:42.615 CC test/nvme/overhead/overhead.o 00:04:42.874 CXX test/cpp_headers/crc64.o 00:04:42.874 LINK poller_perf 00:04:42.874 CC examples/nvme/reconnect/reconnect.o 00:04:42.874 CC app/fio/bdev/fio_plugin.o 00:04:42.874 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:42.874 CC test/nvme/err_injection/err_injection.o 00:04:42.874 CXX test/cpp_headers/dif.o 00:04:42.874 CC test/nvme/startup/startup.o 00:04:42.874 CC examples/nvme/arbitration/arbitration.o 00:04:43.133 CC test/nvme/reserve/reserve.o 00:04:43.133 LINK overhead 00:04:43.133 LINK err_injection 00:04:43.133 CXX test/cpp_headers/dma.o 00:04:43.133 LINK startup 00:04:43.133 LINK reconnect 00:04:43.133 CXX test/cpp_headers/endian.o 00:04:43.133 LINK reserve 00:04:43.392 LINK spdk_bdev 00:04:43.392 CXX test/cpp_headers/env_dpdk.o 00:04:43.392 LINK nvme_manage 00:04:43.392 LINK arbitration 00:04:43.392 CC test/nvme/simple_copy/simple_copy.o 00:04:43.392 CC test/nvme/connect_stress/connect_stress.o 00:04:43.392 CC test/nvme/boot_partition/boot_partition.o 00:04:43.392 CXX test/cpp_headers/env.o 00:04:43.392 CC test/nvme/compliance/nvme_compliance.o 00:04:43.392 CC examples/nvme/hotplug/hotplug.o 
00:04:43.392 CXX test/cpp_headers/event.o 00:04:43.651 LINK connect_stress 00:04:43.651 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:43.651 LINK boot_partition 00:04:43.651 CC examples/nvme/abort/abort.o 00:04:43.651 LINK simple_copy 00:04:43.651 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:43.651 CXX test/cpp_headers/fd_group.o 00:04:43.651 CXX test/cpp_headers/fd.o 00:04:43.651 CXX test/cpp_headers/file.o 00:04:43.651 LINK hotplug 00:04:43.651 LINK cmb_copy 00:04:43.651 CXX test/cpp_headers/ftl.o 00:04:43.910 LINK nvme_compliance 00:04:43.910 LINK pmr_persistence 00:04:43.910 CXX test/cpp_headers/gpt_spec.o 00:04:43.910 CXX test/cpp_headers/hexlify.o 00:04:43.910 CXX test/cpp_headers/histogram_data.o 00:04:43.910 CXX test/cpp_headers/idxd.o 00:04:43.910 LINK abort 00:04:43.910 CXX test/cpp_headers/idxd_spec.o 00:04:43.910 CC test/nvme/fused_ordering/fused_ordering.o 00:04:44.168 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:44.168 CC test/nvme/fdp/fdp.o 00:04:44.168 CXX test/cpp_headers/init.o 00:04:44.168 CXX test/cpp_headers/ioat.o 00:04:44.168 CC examples/sock/hello_world/hello_sock.o 00:04:44.168 CC test/nvme/cuse/cuse.o 00:04:44.168 CXX test/cpp_headers/ioat_spec.o 00:04:44.168 LINK fused_ordering 00:04:44.168 CC examples/vmd/lsvmd/lsvmd.o 00:04:44.168 LINK doorbell_aers 00:04:44.168 CXX test/cpp_headers/iscsi_spec.o 00:04:44.427 CC examples/vmd/led/led.o 00:04:44.427 LINK fdp 00:04:44.427 LINK hello_sock 00:04:44.427 LINK lsvmd 00:04:44.427 CXX test/cpp_headers/json.o 00:04:44.427 LINK led 00:04:44.427 CC examples/nvmf/nvmf/nvmf.o 00:04:44.427 CC examples/util/zipf/zipf.o 00:04:44.685 CXX test/cpp_headers/jsonrpc.o 00:04:44.685 CXX test/cpp_headers/likely.o 00:04:44.685 CC examples/thread/thread/thread_ex.o 00:04:44.685 CXX test/cpp_headers/log.o 00:04:44.685 CC examples/idxd/perf/perf.o 00:04:44.685 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:44.685 LINK zipf 00:04:44.685 CXX test/cpp_headers/lvol.o 00:04:44.685 CXX test/cpp_headers/memory.o 00:04:44.685 LINK nvmf 00:04:44.943 CXX test/cpp_headers/mmio.o 00:04:44.943 CXX test/cpp_headers/nbd.o 00:04:44.943 LINK interrupt_tgt 00:04:44.943 CXX test/cpp_headers/notify.o 00:04:44.943 LINK thread 00:04:44.943 CXX test/cpp_headers/nvme.o 00:04:44.943 CXX test/cpp_headers/nvme_intel.o 00:04:44.943 CXX test/cpp_headers/nvme_ocssd.o 00:04:44.943 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:45.202 CXX test/cpp_headers/nvme_spec.o 00:04:45.202 LINK idxd_perf 00:04:45.202 CXX test/cpp_headers/nvme_zns.o 00:04:45.202 CXX test/cpp_headers/nvmf_cmd.o 00:04:45.202 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:45.202 CXX test/cpp_headers/nvmf.o 00:04:45.202 CXX test/cpp_headers/nvmf_spec.o 00:04:45.202 CXX test/cpp_headers/nvmf_transport.o 00:04:45.202 CXX test/cpp_headers/opal.o 00:04:45.202 LINK cuse 00:04:45.202 CXX test/cpp_headers/opal_spec.o 00:04:45.202 CXX test/cpp_headers/pci_ids.o 00:04:45.460 CXX test/cpp_headers/pipe.o 00:04:45.460 CXX test/cpp_headers/queue.o 00:04:45.460 CXX test/cpp_headers/reduce.o 00:04:45.460 CXX test/cpp_headers/rpc.o 00:04:45.460 CXX test/cpp_headers/scheduler.o 00:04:45.460 CXX test/cpp_headers/scsi.o 00:04:45.460 CXX test/cpp_headers/scsi_spec.o 00:04:45.460 CXX test/cpp_headers/sock.o 00:04:45.460 CXX test/cpp_headers/stdinc.o 00:04:45.460 CXX test/cpp_headers/string.o 00:04:45.460 CXX test/cpp_headers/thread.o 00:04:45.460 CXX test/cpp_headers/trace.o 00:04:45.460 CXX test/cpp_headers/trace_parser.o 00:04:45.460 CXX test/cpp_headers/tree.o 00:04:45.719 CXX test/cpp_headers/ublk.o 00:04:45.719 
CXX test/cpp_headers/util.o 00:04:45.719 CXX test/cpp_headers/uuid.o 00:04:45.719 CXX test/cpp_headers/version.o 00:04:45.719 CXX test/cpp_headers/vfio_user_pci.o 00:04:45.719 CXX test/cpp_headers/vfio_user_spec.o 00:04:45.719 CXX test/cpp_headers/vhost.o 00:04:45.719 CXX test/cpp_headers/vmd.o 00:04:45.719 CXX test/cpp_headers/xor.o 00:04:45.719 CXX test/cpp_headers/zipf.o 00:04:46.023 LINK esnap 00:04:46.324 00:04:46.324 real 0m52.044s 00:04:46.324 user 5m1.870s 00:04:46.324 sys 0m57.566s 00:04:46.324 ************************************ 00:04:46.324 END TEST make 00:04:46.324 ************************************ 00:04:46.324 18:13:44 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:46.324 18:13:44 -- common/autotest_common.sh@10 -- $ set +x 00:04:46.324 18:13:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:46.324 18:13:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:46.324 18:13:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:46.583 18:13:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:46.583 18:13:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:46.583 18:13:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:46.583 18:13:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:46.583 18:13:44 -- scripts/common.sh@335 -- # IFS=.-: 00:04:46.583 18:13:44 -- scripts/common.sh@335 -- # read -ra ver1 00:04:46.583 18:13:44 -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.583 18:13:44 -- scripts/common.sh@336 -- # read -ra ver2 00:04:46.583 18:13:44 -- scripts/common.sh@337 -- # local 'op=<' 00:04:46.583 18:13:44 -- scripts/common.sh@339 -- # ver1_l=2 00:04:46.583 18:13:44 -- scripts/common.sh@340 -- # ver2_l=1 00:04:46.583 18:13:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:46.583 18:13:44 -- scripts/common.sh@343 -- # case "$op" in 00:04:46.583 18:13:44 -- scripts/common.sh@344 -- # : 1 00:04:46.583 18:13:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:46.583 18:13:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.583 18:13:44 -- scripts/common.sh@364 -- # decimal 1 00:04:46.583 18:13:44 -- scripts/common.sh@352 -- # local d=1 00:04:46.583 18:13:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.583 18:13:44 -- scripts/common.sh@354 -- # echo 1 00:04:46.583 18:13:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:46.583 18:13:44 -- scripts/common.sh@365 -- # decimal 2 00:04:46.583 18:13:44 -- scripts/common.sh@352 -- # local d=2 00:04:46.583 18:13:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.583 18:13:44 -- scripts/common.sh@354 -- # echo 2 00:04:46.583 18:13:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:46.583 18:13:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:46.583 18:13:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:46.583 18:13:44 -- scripts/common.sh@367 -- # return 0 00:04:46.583 18:13:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.583 18:13:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:46.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.583 --rc genhtml_branch_coverage=1 00:04:46.583 --rc genhtml_function_coverage=1 00:04:46.583 --rc genhtml_legend=1 00:04:46.583 --rc geninfo_all_blocks=1 00:04:46.583 --rc geninfo_unexecuted_blocks=1 00:04:46.583 00:04:46.583 ' 00:04:46.583 18:13:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:46.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.583 --rc genhtml_branch_coverage=1 00:04:46.583 --rc genhtml_function_coverage=1 00:04:46.583 --rc genhtml_legend=1 00:04:46.583 --rc geninfo_all_blocks=1 00:04:46.584 --rc geninfo_unexecuted_blocks=1 00:04:46.584 00:04:46.584 ' 00:04:46.584 18:13:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:46.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.584 --rc genhtml_branch_coverage=1 00:04:46.584 --rc genhtml_function_coverage=1 00:04:46.584 --rc genhtml_legend=1 00:04:46.584 --rc geninfo_all_blocks=1 00:04:46.584 --rc geninfo_unexecuted_blocks=1 00:04:46.584 00:04:46.584 ' 00:04:46.584 18:13:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:46.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.584 --rc genhtml_branch_coverage=1 00:04:46.584 --rc genhtml_function_coverage=1 00:04:46.584 --rc genhtml_legend=1 00:04:46.584 --rc geninfo_all_blocks=1 00:04:46.584 --rc geninfo_unexecuted_blocks=1 00:04:46.584 00:04:46.584 ' 00:04:46.584 18:13:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:46.584 18:13:44 -- nvmf/common.sh@7 -- # uname -s 00:04:46.584 18:13:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.584 18:13:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.584 18:13:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.584 18:13:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.584 18:13:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.584 18:13:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.584 18:13:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.584 18:13:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.584 18:13:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.584 18:13:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.584 18:13:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:04:46.584 
18:13:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:04:46.584 18:13:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.584 18:13:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.584 18:13:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:46.584 18:13:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.584 18:13:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.584 18:13:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.584 18:13:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.584 18:13:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.584 18:13:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.584 18:13:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.584 18:13:44 -- paths/export.sh@5 -- # export PATH 00:04:46.584 18:13:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.584 18:13:44 -- nvmf/common.sh@46 -- # : 0 00:04:46.584 18:13:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:46.584 18:13:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:46.584 18:13:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:46.584 18:13:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.584 18:13:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.584 18:13:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:46.584 18:13:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:46.584 18:13:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:46.584 18:13:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:46.584 18:13:44 -- spdk/autotest.sh@32 -- # uname -s 00:04:46.584 18:13:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:46.584 18:13:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:46.584 18:13:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:46.584 18:13:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:46.584 18:13:44 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:46.584 18:13:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:46.584 18:13:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:46.584 18:13:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:46.584 18:13:44 -- spdk/autotest.sh@48 -- # 
udevadm_pid=59785 00:04:46.584 18:13:44 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:46.584 18:13:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:46.584 18:13:44 -- spdk/autotest.sh@54 -- # echo 59795 00:04:46.584 18:13:44 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:46.584 18:13:44 -- spdk/autotest.sh@56 -- # echo 59798 00:04:46.584 18:13:44 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:46.584 18:13:44 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:46.584 18:13:44 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:46.584 18:13:44 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:46.584 18:13:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.584 18:13:44 -- common/autotest_common.sh@10 -- # set +x 00:04:46.584 18:13:44 -- spdk/autotest.sh@70 -- # create_test_list 00:04:46.584 18:13:44 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:46.584 18:13:44 -- common/autotest_common.sh@10 -- # set +x 00:04:46.584 18:13:44 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:46.584 18:13:44 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:46.584 18:13:44 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:46.584 18:13:44 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:46.584 18:13:44 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:46.584 18:13:44 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:46.584 18:13:44 -- common/autotest_common.sh@1450 -- # uname 00:04:46.584 18:13:44 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:46.584 18:13:44 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:46.584 18:13:44 -- common/autotest_common.sh@1470 -- # uname 00:04:46.584 18:13:44 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:46.584 18:13:44 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:46.584 18:13:44 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:46.584 lcov: LCOV version 1.15 00:04:46.584 18:13:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:54.704 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:54.704 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:54.704 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:54.704 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:54.704 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:54.704 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:12.795 18:14:10 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:12.795 18:14:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.795 18:14:10 -- common/autotest_common.sh@10 -- # set +x 00:05:12.795 18:14:10 -- spdk/autotest.sh@89 -- # rm -f 00:05:12.795 18:14:10 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.363 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:13.622 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:13.622 18:14:11 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:13.622 18:14:11 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:13.622 18:14:11 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:13.622 18:14:11 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:13.622 18:14:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.622 18:14:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:13.622 18:14:11 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:13.622 18:14:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:13.622 18:14:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:13.622 18:14:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.622 18:14:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:13.622 18:14:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:13.622 18:14:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:13.622 18:14:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:13.622 18:14:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.622 18:14:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:13.622 18:14:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:13.622 18:14:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:13.622 18:14:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:13.622 18:14:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.622 18:14:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:13.622 18:14:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:13.622 18:14:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:13.622 18:14:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:13.622 18:14:11 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:13.622 18:14:11 -- spdk/autotest.sh@108 -- # grep -v p 00:05:13.622 18:14:11 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:13.622 18:14:11 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:13.622 18:14:11 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:13.622 18:14:11 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:13.623 18:14:11 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:13.623 18:14:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:13.623 No valid GPT data, bailing 00:05:13.623 18:14:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:05:13.623 18:14:11 -- scripts/common.sh@393 -- # pt= 00:05:13.623 18:14:11 -- scripts/common.sh@394 -- # return 1 00:05:13.623 18:14:11 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:13.623 1+0 records in 00:05:13.623 1+0 records out 00:05:13.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00478339 s, 219 MB/s 00:05:13.623 18:14:11 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:13.623 18:14:11 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:13.623 18:14:11 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:13.623 18:14:11 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:13.623 18:14:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:13.623 No valid GPT data, bailing 00:05:13.623 18:14:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:13.623 18:14:11 -- scripts/common.sh@393 -- # pt= 00:05:13.623 18:14:11 -- scripts/common.sh@394 -- # return 1 00:05:13.623 18:14:11 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:13.623 1+0 records in 00:05:13.623 1+0 records out 00:05:13.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00323913 s, 324 MB/s 00:05:13.623 18:14:11 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:13.623 18:14:11 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:13.623 18:14:11 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:13.623 18:14:11 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:13.623 18:14:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:13.623 No valid GPT data, bailing 00:05:13.882 18:14:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:13.882 18:14:11 -- scripts/common.sh@393 -- # pt= 00:05:13.882 18:14:11 -- scripts/common.sh@394 -- # return 1 00:05:13.882 18:14:11 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:13.882 1+0 records in 00:05:13.882 1+0 records out 00:05:13.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386829 s, 271 MB/s 00:05:13.882 18:14:11 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:13.882 18:14:11 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:13.882 18:14:11 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:13.882 18:14:11 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:13.882 18:14:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:13.882 No valid GPT data, bailing 00:05:13.882 18:14:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:13.882 18:14:11 -- scripts/common.sh@393 -- # pt= 00:05:13.882 18:14:11 -- scripts/common.sh@394 -- # return 1 00:05:13.882 18:14:11 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:13.882 1+0 records in 00:05:13.882 1+0 records out 00:05:13.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428762 s, 245 MB/s 00:05:13.882 18:14:11 -- spdk/autotest.sh@116 -- # sync 00:05:14.451 18:14:12 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:14.451 18:14:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:14.451 18:14:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:16.356 18:14:14 -- spdk/autotest.sh@122 -- # uname -s 00:05:16.356 18:14:14 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
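The pre_cleanup pass above probes each unused NVMe namespace for a partition table and, when none is found ("No valid GPT data, bailing"), zeroes its first MiB before the tests start. Below is a minimal standalone sketch of that step using the same blkid and dd commands seen in the trace; the wipe_if_idle wrapper, the zoned-device skip, and the hard-coded device list are illustrative assumptions, not the actual autotest.sh code.

    #!/usr/bin/env bash
    # Hedged sketch, not the autotest code itself: it mirrors the per-namespace
    # checks traced above (zoned check, partition-table probe, 1 MiB zero-fill).
    # The function name and the explicit device list are illustrative only.
    wipe_if_idle() {
        local dev=$1 name=${1##*/}
        # Skip zoned namespaces, as the is_block_zoned check in the trace does.
        if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
            echo "$dev is zoned, skipping"
            return 0
        fi
        # Skip devices that already carry a partition table (blkid prints PTTYPE).
        if [[ -n $(blkid -s PTTYPE -o value "$dev" 2>/dev/null) ]]; then
            echo "$dev is partitioned, skipping"
            return 0
        fi
        # No valid GPT data: zero the first MiB, matching the dd call in the log.
        dd if=/dev/zero of="$dev" bs=1M count=1
    }

    for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3; do
        wipe_if_idle "$dev"
    done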
00:05:16.356 18:14:14 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:16.356 18:14:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.356 18:14:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.356 18:14:14 -- common/autotest_common.sh@10 -- # set +x 00:05:16.356 ************************************ 00:05:16.356 START TEST setup.sh 00:05:16.356 ************************************ 00:05:16.356 18:14:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:16.356 * Looking for test storage... 00:05:16.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:16.356 18:14:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:16.356 18:14:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:16.356 18:14:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:16.356 18:14:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:16.357 18:14:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:16.357 18:14:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:16.357 18:14:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:16.357 18:14:14 -- scripts/common.sh@335 -- # IFS=.-: 00:05:16.357 18:14:14 -- scripts/common.sh@335 -- # read -ra ver1 00:05:16.357 18:14:14 -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.357 18:14:14 -- scripts/common.sh@336 -- # read -ra ver2 00:05:16.357 18:14:14 -- scripts/common.sh@337 -- # local 'op=<' 00:05:16.357 18:14:14 -- scripts/common.sh@339 -- # ver1_l=2 00:05:16.357 18:14:14 -- scripts/common.sh@340 -- # ver2_l=1 00:05:16.357 18:14:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:16.357 18:14:14 -- scripts/common.sh@343 -- # case "$op" in 00:05:16.357 18:14:14 -- scripts/common.sh@344 -- # : 1 00:05:16.357 18:14:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:16.357 18:14:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.357 18:14:14 -- scripts/common.sh@364 -- # decimal 1 00:05:16.357 18:14:14 -- scripts/common.sh@352 -- # local d=1 00:05:16.357 18:14:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.357 18:14:14 -- scripts/common.sh@354 -- # echo 1 00:05:16.357 18:14:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:16.357 18:14:14 -- scripts/common.sh@365 -- # decimal 2 00:05:16.357 18:14:14 -- scripts/common.sh@352 -- # local d=2 00:05:16.357 18:14:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.357 18:14:14 -- scripts/common.sh@354 -- # echo 2 00:05:16.357 18:14:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:16.357 18:14:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:16.357 18:14:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:16.357 18:14:14 -- scripts/common.sh@367 -- # return 0 00:05:16.357 18:14:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.357 18:14:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.357 --rc genhtml_branch_coverage=1 00:05:16.357 --rc genhtml_function_coverage=1 00:05:16.357 --rc genhtml_legend=1 00:05:16.357 --rc geninfo_all_blocks=1 00:05:16.357 --rc geninfo_unexecuted_blocks=1 00:05:16.357 00:05:16.357 ' 00:05:16.357 18:14:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.357 --rc genhtml_branch_coverage=1 00:05:16.357 --rc genhtml_function_coverage=1 00:05:16.357 --rc genhtml_legend=1 00:05:16.357 --rc geninfo_all_blocks=1 00:05:16.357 --rc geninfo_unexecuted_blocks=1 00:05:16.357 00:05:16.357 ' 00:05:16.357 18:14:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.357 --rc genhtml_branch_coverage=1 00:05:16.357 --rc genhtml_function_coverage=1 00:05:16.357 --rc genhtml_legend=1 00:05:16.357 --rc geninfo_all_blocks=1 00:05:16.357 --rc geninfo_unexecuted_blocks=1 00:05:16.357 00:05:16.357 ' 00:05:16.357 18:14:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.357 --rc genhtml_branch_coverage=1 00:05:16.357 --rc genhtml_function_coverage=1 00:05:16.357 --rc genhtml_legend=1 00:05:16.357 --rc geninfo_all_blocks=1 00:05:16.357 --rc geninfo_unexecuted_blocks=1 00:05:16.357 00:05:16.357 ' 00:05:16.357 18:14:14 -- setup/test-setup.sh@10 -- # uname -s 00:05:16.357 18:14:14 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:16.357 18:14:14 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:16.357 18:14:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.357 18:14:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.357 18:14:14 -- common/autotest_common.sh@10 -- # set +x 00:05:16.357 ************************************ 00:05:16.357 START TEST acl 00:05:16.357 ************************************ 00:05:16.357 18:14:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:16.617 * Looking for test storage... 
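The repeated "lt 1.15 2" trace above is scripts/common.sh splitting both version strings on dots and dashes and comparing them field by field, to decide whether the installed lcov predates 2.x and therefore still accepts the --rc lcov_branch_coverage / lcov_function_coverage options. A minimal sketch of that comparison follows, assuming purely numeric components; version_lt is an illustrative name, not the real helper.

    #!/usr/bin/env bash
    # Hedged sketch of the dotted-version comparison traced above
    # (scripts/common.sh: lt -> cmp_versions), simplified to numeric fields.
    version_lt() {
        local -a ver1 ver2
        IFS='.-' read -ra ver1 <<< "$1"
        IFS='.-' read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0    # strictly older
            (( a > b )) && return 1    # strictly newer
        done
        return 1                       # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2, keep the branch/function coverage flags"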
00:05:16.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:16.617 18:14:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:16.617 18:14:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:16.617 18:14:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:16.617 18:14:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:16.617 18:14:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:16.617 18:14:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:16.617 18:14:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:16.617 18:14:14 -- scripts/common.sh@335 -- # IFS=.-: 00:05:16.617 18:14:14 -- scripts/common.sh@335 -- # read -ra ver1 00:05:16.617 18:14:14 -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.617 18:14:14 -- scripts/common.sh@336 -- # read -ra ver2 00:05:16.617 18:14:14 -- scripts/common.sh@337 -- # local 'op=<' 00:05:16.617 18:14:14 -- scripts/common.sh@339 -- # ver1_l=2 00:05:16.617 18:14:14 -- scripts/common.sh@340 -- # ver2_l=1 00:05:16.617 18:14:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:16.617 18:14:14 -- scripts/common.sh@343 -- # case "$op" in 00:05:16.617 18:14:14 -- scripts/common.sh@344 -- # : 1 00:05:16.617 18:14:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:16.617 18:14:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.617 18:14:14 -- scripts/common.sh@364 -- # decimal 1 00:05:16.617 18:14:14 -- scripts/common.sh@352 -- # local d=1 00:05:16.617 18:14:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.617 18:14:14 -- scripts/common.sh@354 -- # echo 1 00:05:16.617 18:14:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:16.617 18:14:14 -- scripts/common.sh@365 -- # decimal 2 00:05:16.617 18:14:14 -- scripts/common.sh@352 -- # local d=2 00:05:16.617 18:14:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.617 18:14:14 -- scripts/common.sh@354 -- # echo 2 00:05:16.617 18:14:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:16.617 18:14:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:16.617 18:14:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:16.617 18:14:14 -- scripts/common.sh@367 -- # return 0 00:05:16.617 18:14:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.617 18:14:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:16.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.617 --rc genhtml_branch_coverage=1 00:05:16.617 --rc genhtml_function_coverage=1 00:05:16.617 --rc genhtml_legend=1 00:05:16.617 --rc geninfo_all_blocks=1 00:05:16.617 --rc geninfo_unexecuted_blocks=1 00:05:16.617 00:05:16.617 ' 00:05:16.617 18:14:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:16.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.617 --rc genhtml_branch_coverage=1 00:05:16.617 --rc genhtml_function_coverage=1 00:05:16.617 --rc genhtml_legend=1 00:05:16.617 --rc geninfo_all_blocks=1 00:05:16.617 --rc geninfo_unexecuted_blocks=1 00:05:16.617 00:05:16.617 ' 00:05:16.617 18:14:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:16.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.617 --rc genhtml_branch_coverage=1 00:05:16.617 --rc genhtml_function_coverage=1 00:05:16.617 --rc genhtml_legend=1 00:05:16.617 --rc geninfo_all_blocks=1 00:05:16.617 --rc geninfo_unexecuted_blocks=1 00:05:16.617 00:05:16.617 ' 00:05:16.617 18:14:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:16.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.617 --rc genhtml_branch_coverage=1 00:05:16.617 --rc genhtml_function_coverage=1 00:05:16.617 --rc genhtml_legend=1 00:05:16.617 --rc geninfo_all_blocks=1 00:05:16.617 --rc geninfo_unexecuted_blocks=1 00:05:16.617 00:05:16.617 ' 00:05:16.617 18:14:14 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:16.617 18:14:14 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:16.617 18:14:14 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:16.617 18:14:14 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:16.617 18:14:14 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:16.617 18:14:14 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:16.617 18:14:14 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:16.617 18:14:14 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:16.617 18:14:14 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:16.617 18:14:14 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:16.617 18:14:14 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:16.617 18:14:14 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:16.617 18:14:14 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:16.617 18:14:14 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:16.617 18:14:14 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:16.617 18:14:14 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:16.617 18:14:14 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:16.617 18:14:14 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:16.617 18:14:14 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:16.617 18:14:14 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:16.617 18:14:14 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:16.617 18:14:14 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:16.617 18:14:14 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:16.617 18:14:14 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:16.617 18:14:14 -- setup/acl.sh@12 -- # devs=() 00:05:16.617 18:14:14 -- setup/acl.sh@12 -- # declare -a devs 00:05:16.617 18:14:14 -- setup/acl.sh@13 -- # drivers=() 00:05:16.617 18:14:14 -- setup/acl.sh@13 -- # declare -A drivers 00:05:16.617 18:14:14 -- setup/acl.sh@51 -- # setup reset 00:05:16.617 18:14:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.617 18:14:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.623 18:14:15 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:17.623 18:14:15 -- setup/acl.sh@16 -- # local dev driver 00:05:17.623 18:14:15 -- setup/acl.sh@15 -- # setup output status 00:05:17.623 18:14:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:17.623 18:14:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.623 18:14:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:17.623 Hugepages 00:05:17.624 node hugesize free / total 00:05:17.624 18:14:15 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:17.624 18:14:15 -- setup/acl.sh@19 -- # continue 00:05:17.624 18:14:15 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:17.624 00:05:17.624 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:17.624 18:14:15 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:17.624 18:14:15 -- setup/acl.sh@19 -- # continue 00:05:17.624 18:14:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:17.624 18:14:15 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:17.624 18:14:15 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:17.624 18:14:15 -- setup/acl.sh@20 -- # continue 00:05:17.624 18:14:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:17.624 18:14:15 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:17.624 18:14:15 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:17.624 18:14:15 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:17.624 18:14:15 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:17.624 18:14:15 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:17.624 18:14:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:17.888 18:14:15 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:17.888 18:14:15 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:17.888 18:14:15 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:17.888 18:14:15 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:17.888 18:14:15 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:17.888 18:14:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:17.888 18:14:15 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:17.888 18:14:15 -- setup/acl.sh@54 -- # run_test denied denied 00:05:17.888 18:14:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.888 18:14:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.888 18:14:15 -- common/autotest_common.sh@10 -- # set +x 00:05:17.888 ************************************ 00:05:17.888 START TEST denied 00:05:17.888 ************************************ 00:05:17.888 18:14:15 -- common/autotest_common.sh@1114 -- # denied 00:05:17.888 18:14:15 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:17.888 18:14:15 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:17.888 18:14:15 -- setup/acl.sh@38 -- # setup output config 00:05:17.888 18:14:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.888 18:14:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:18.825 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:18.825 18:14:16 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:18.825 18:14:16 -- setup/acl.sh@28 -- # local dev driver 00:05:18.825 18:14:16 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:18.825 18:14:16 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:18.825 18:14:16 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:18.825 18:14:16 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:18.825 18:14:16 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:18.825 18:14:16 -- setup/acl.sh@41 -- # setup reset 00:05:18.825 18:14:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.825 18:14:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.084 00:05:19.084 real 0m1.438s 00:05:19.084 user 0m0.614s 00:05:19.084 sys 0m0.780s 00:05:19.084 18:14:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.084 18:14:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.084 ************************************ 00:05:19.084 END TEST denied 00:05:19.084 
************************************ 00:05:19.343 18:14:17 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:19.343 18:14:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.343 18:14:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.343 18:14:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.343 ************************************ 00:05:19.343 START TEST allowed 00:05:19.343 ************************************ 00:05:19.343 18:14:17 -- common/autotest_common.sh@1114 -- # allowed 00:05:19.343 18:14:17 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:19.343 18:14:17 -- setup/acl.sh@45 -- # setup output config 00:05:19.343 18:14:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.343 18:14:17 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:19.343 18:14:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:19.911 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.911 18:14:18 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:19.911 18:14:18 -- setup/acl.sh@28 -- # local dev driver 00:05:19.911 18:14:18 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:19.911 18:14:18 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:19.911 18:14:18 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:19.911 18:14:18 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:19.911 18:14:18 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:19.911 18:14:18 -- setup/acl.sh@48 -- # setup reset 00:05:19.911 18:14:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:19.911 18:14:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.849 00:05:20.849 real 0m1.471s 00:05:20.849 user 0m0.668s 00:05:20.849 sys 0m0.807s 00:05:20.849 18:14:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.849 18:14:18 -- common/autotest_common.sh@10 -- # set +x 00:05:20.849 ************************************ 00:05:20.849 END TEST allowed 00:05:20.849 ************************************ 00:05:20.849 00:05:20.849 real 0m4.300s 00:05:20.849 user 0m1.957s 00:05:20.849 sys 0m2.326s 00:05:20.849 18:14:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.849 18:14:18 -- common/autotest_common.sh@10 -- # set +x 00:05:20.849 ************************************ 00:05:20.849 END TEST acl 00:05:20.849 ************************************ 00:05:20.849 18:14:18 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:20.849 18:14:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.849 18:14:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.849 18:14:18 -- common/autotest_common.sh@10 -- # set +x 00:05:20.849 ************************************ 00:05:20.849 START TEST hugepages 00:05:20.849 ************************************ 00:05:20.849 18:14:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:20.849 * Looking for test storage... 
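The denied and allowed tests above drive setup.sh with PCI_BLOCKED / PCI_ALLOWED and then confirm which driver each controller ended up bound to by resolving its sysfs driver symlink (the readlink -f .../driver calls in the trace). A minimal sketch of that verification follows, under the assumption that the expected driver name is passed in explicitly; verify_driver is an illustrative name, not the acl.sh helper.

    #!/usr/bin/env bash
    # Hedged sketch of the driver-binding check used by the denied/allowed tests.
    verify_driver() {
        local bdf=$1 expected=$2
        local node=/sys/bus/pci/devices/$bdf
        [[ -e $node ]] || { echo "$bdf: no such PCI device"; return 1; }
        local driver
        driver=$(readlink -f "$node/driver" 2>/dev/null)   # e.g. .../drivers/nvme
        driver=${driver##*/}
        if [[ $driver == "$expected" ]]; then
            echo "$bdf is bound to $expected, as expected"
        else
            echo "$bdf is bound to '${driver:-nothing}', expected $expected"
            return 1
        fi
    }

    # After setup.sh config with PCI_BLOCKED=' 0000:00:06.0', the denied test
    # expects that controller to still be on the kernel nvme driver:
    verify_driver 0000:00:06.0 nvme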
00:05:20.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:20.849 18:14:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:20.849 18:14:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:20.849 18:14:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:20.849 18:14:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:20.849 18:14:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:20.849 18:14:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:20.849 18:14:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:20.849 18:14:19 -- scripts/common.sh@335 -- # IFS=.-: 00:05:20.849 18:14:19 -- scripts/common.sh@335 -- # read -ra ver1 00:05:20.849 18:14:19 -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.849 18:14:19 -- scripts/common.sh@336 -- # read -ra ver2 00:05:20.849 18:14:19 -- scripts/common.sh@337 -- # local 'op=<' 00:05:20.849 18:14:19 -- scripts/common.sh@339 -- # ver1_l=2 00:05:20.849 18:14:19 -- scripts/common.sh@340 -- # ver2_l=1 00:05:20.849 18:14:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:20.849 18:14:19 -- scripts/common.sh@343 -- # case "$op" in 00:05:20.849 18:14:19 -- scripts/common.sh@344 -- # : 1 00:05:20.849 18:14:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:20.849 18:14:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.849 18:14:19 -- scripts/common.sh@364 -- # decimal 1 00:05:20.849 18:14:19 -- scripts/common.sh@352 -- # local d=1 00:05:20.849 18:14:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.849 18:14:19 -- scripts/common.sh@354 -- # echo 1 00:05:21.110 18:14:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:21.110 18:14:19 -- scripts/common.sh@365 -- # decimal 2 00:05:21.110 18:14:19 -- scripts/common.sh@352 -- # local d=2 00:05:21.110 18:14:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.110 18:14:19 -- scripts/common.sh@354 -- # echo 2 00:05:21.110 18:14:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:21.110 18:14:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:21.110 18:14:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:21.110 18:14:19 -- scripts/common.sh@367 -- # return 0 00:05:21.110 18:14:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.110 18:14:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:21.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.110 --rc genhtml_branch_coverage=1 00:05:21.110 --rc genhtml_function_coverage=1 00:05:21.110 --rc genhtml_legend=1 00:05:21.110 --rc geninfo_all_blocks=1 00:05:21.110 --rc geninfo_unexecuted_blocks=1 00:05:21.110 00:05:21.110 ' 00:05:21.110 18:14:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:21.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.110 --rc genhtml_branch_coverage=1 00:05:21.110 --rc genhtml_function_coverage=1 00:05:21.110 --rc genhtml_legend=1 00:05:21.110 --rc geninfo_all_blocks=1 00:05:21.110 --rc geninfo_unexecuted_blocks=1 00:05:21.110 00:05:21.110 ' 00:05:21.110 18:14:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:21.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.110 --rc genhtml_branch_coverage=1 00:05:21.110 --rc genhtml_function_coverage=1 00:05:21.110 --rc genhtml_legend=1 00:05:21.110 --rc geninfo_all_blocks=1 00:05:21.110 --rc geninfo_unexecuted_blocks=1 00:05:21.110 00:05:21.110 ' 00:05:21.110 18:14:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:21.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.110 --rc genhtml_branch_coverage=1 00:05:21.110 --rc genhtml_function_coverage=1 00:05:21.110 --rc genhtml_legend=1 00:05:21.110 --rc geninfo_all_blocks=1 00:05:21.110 --rc geninfo_unexecuted_blocks=1 00:05:21.110 00:05:21.110 ' 00:05:21.110 18:14:19 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:21.111 18:14:19 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:21.111 18:14:19 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:21.111 18:14:19 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:21.111 18:14:19 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:21.111 18:14:19 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:21.111 18:14:19 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:21.111 18:14:19 -- setup/common.sh@18 -- # local node= 00:05:21.111 18:14:19 -- setup/common.sh@19 -- # local var val 00:05:21.111 18:14:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.111 18:14:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.111 18:14:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.111 18:14:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.111 18:14:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.111 18:14:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 4835552 kB' 'MemAvailable: 7337300 kB' 'Buffers: 2684 kB' 'Cached: 2706240 kB' 'SwapCached: 0 kB' 'Active: 455264 kB' 'Inactive: 2370584 kB' 'Active(anon): 127436 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118584 kB' 'Mapped: 51060 kB' 'Shmem: 10512 kB' 'KReclaimable: 80564 kB' 'Slab: 181468 kB' 'SReclaimable: 80564 kB' 'SUnreclaim: 100904 kB' 'KernelStack: 6880 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411012 kB' 'Committed_AS: 319972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- 
setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.111 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.111 18:14:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # continue 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.112 18:14:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.112 18:14:19 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:21.112 18:14:19 -- setup/common.sh@33 -- # echo 2048 00:05:21.112 18:14:19 -- setup/common.sh@33 -- # return 0 00:05:21.112 18:14:19 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:21.112 18:14:19 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:21.112 18:14:19 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:21.112 18:14:19 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:21.112 18:14:19 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:21.112 18:14:19 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:21.112 18:14:19 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:21.112 18:14:19 -- setup/hugepages.sh@207 -- # get_nodes 00:05:21.112 18:14:19 -- setup/hugepages.sh@27 -- # local node 00:05:21.112 18:14:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.112 18:14:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:21.112 18:14:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.112 18:14:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.112 18:14:19 -- setup/hugepages.sh@208 -- # clear_hp 00:05:21.112 18:14:19 -- setup/hugepages.sh@37 -- # local node hp 00:05:21.112 18:14:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:21.112 18:14:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.112 18:14:19 -- setup/hugepages.sh@41 -- # echo 0 00:05:21.112 18:14:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.112 18:14:19 -- setup/hugepages.sh@41 -- # echo 0 00:05:21.112 18:14:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:21.112 18:14:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:21.112 18:14:19 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:21.112 18:14:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.112 18:14:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.112 18:14:19 -- common/autotest_common.sh@10 -- # set +x 00:05:21.112 ************************************ 00:05:21.112 START TEST default_setup 00:05:21.112 ************************************ 00:05:21.112 18:14:19 -- common/autotest_common.sh@1114 -- # default_setup 00:05:21.112 18:14:19 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:21.112 18:14:19 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:21.112 18:14:19 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:21.112 18:14:19 -- setup/hugepages.sh@51 -- # shift 00:05:21.112 18:14:19 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:21.112 18:14:19 -- setup/hugepages.sh@52 -- # local node_ids 00:05:21.112 18:14:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.112 18:14:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:21.112 18:14:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:21.112 18:14:19 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:21.112 18:14:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.112 18:14:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:21.112 18:14:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:21.112 18:14:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.112 18:14:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.112 18:14:19 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:21.112 18:14:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:21.112 18:14:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:21.112 18:14:19 -- setup/hugepages.sh@73 -- # return 0 00:05:21.112 18:14:19 -- setup/hugepages.sh@137 -- # setup output 00:05:21.112 18:14:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.112 18:14:19 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.681 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.946 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.946 18:14:20 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:21.946 18:14:20 -- setup/hugepages.sh@89 -- # local node 00:05:21.946 18:14:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.946 18:14:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.946 18:14:20 -- setup/hugepages.sh@92 -- # local surp 00:05:21.946 18:14:20 -- setup/hugepages.sh@93 -- # local resv 00:05:21.946 18:14:20 -- setup/hugepages.sh@94 -- # local anon 00:05:21.946 18:14:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.946 18:14:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.946 18:14:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.946 18:14:20 -- setup/common.sh@18 -- # local node= 00:05:21.946 18:14:20 -- setup/common.sh@19 -- # local var val 00:05:21.946 18:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.946 18:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.946 18:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.946 18:14:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.946 18:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.946 18:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6947180 kB' 'MemAvailable: 9448884 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456348 kB' 'Inactive: 2370596 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119668 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 80456 kB' 'Slab: 181312 kB' 'SReclaimable: 80456 kB' 'SUnreclaim: 100856 kB' 'KernelStack: 6800 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.946 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.946 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- 
setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.947 18:14:20 -- setup/common.sh@33 -- # echo 0 00:05:21.947 18:14:20 -- setup/common.sh@33 -- # return 0 00:05:21.947 18:14:20 -- setup/hugepages.sh@97 -- # anon=0 00:05:21.947 18:14:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.947 18:14:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.947 18:14:20 -- setup/common.sh@18 -- # local node= 00:05:21.947 18:14:20 -- setup/common.sh@19 -- # local var val 00:05:21.947 18:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.947 18:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.947 18:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.947 18:14:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.947 18:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.947 18:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6947432 kB' 'MemAvailable: 9449136 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456472 kB' 'Inactive: 2370596 kB' 'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119740 kB' 'Mapped: 50996 kB' 'Shmem: 10488 kB' 'KReclaimable: 80456 kB' 'Slab: 181308 kB' 'SReclaimable: 80456 kB' 'SUnreclaim: 100852 kB' 'KernelStack: 6752 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.947 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.947 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 
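The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" above are setup/common.sh's get_meminfo helper scanning every field of the captured meminfo snapshot until it reaches the one it was asked for, then echoing that field's value. A minimal sketch of that lookup, paraphrased from the trace rather than copied from the real script (the function name get_meminfo_sketch and the exact argument handling are assumptions):

    # Sketch only: reconstructs the pattern visible in the trace, not the literal source.
    shopt -s extglob                            # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {                      # usage: get_meminfo_sketch <field> [<node>]
        local get=$1 node=$2 mem_f=/proc/meminfo
        local -a mem
        local line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # every skipped field is one "continue" in the trace
            echo "$val"                         # e.g. 2048 for Hugepagesize, 0 for HugePages_Surp
            return 0
        done
        return 1
    }

In this run the helper resolves Hugepagesize to 2048 and AnonHugePages, HugePages_Surp and HugePages_Rsvd to 0, which is what the echo / return 0 pairs in the trace report.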
00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- 
setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 
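For orientation while reading this verification pass: the 1024-page target it checks against came from the get_test_nr_hugepages 2097152 0 call earlier in the trace, and the numbers are consistent with dividing the requested size in kB by the 2048 kB default hugepage size detected above. Spelled out as a worked example (values taken from the trace; the variable names are only illustrative):

    requested_kb=2097152                          # argument passed to get_test_nr_hugepages
    hugepagesize_kb=2048                          # Hugepagesize read out of /proc/meminfo
    echo $(( requested_kb / hugepagesize_kb ))    # 1024, the nr_hugepages assigned to node 0

That also matches the 'Hugetlb: 2097152 kB' line in the meminfo snapshots: 1024 pages x 2048 kB.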
00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.948 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.948 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.948 18:14:20 -- setup/common.sh@33 -- # echo 0 00:05:21.948 18:14:20 -- setup/common.sh@33 -- # return 0 00:05:21.948 18:14:20 -- setup/hugepages.sh@99 -- # surp=0 00:05:21.948 18:14:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.948 18:14:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.948 18:14:20 -- setup/common.sh@18 -- # local node= 00:05:21.948 18:14:20 -- setup/common.sh@19 -- # local var val 00:05:21.948 18:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.948 18:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.948 18:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.948 18:14:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.948 18:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.949 18:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.949 
18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6947184 kB' 'MemAvailable: 9448792 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456132 kB' 'Inactive: 2370596 kB' 'Active(anon): 128304 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119384 kB' 'Mapped: 50836 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181128 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100864 kB' 'KernelStack: 6800 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 
18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.949 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.949 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 
18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.950 18:14:20 -- setup/common.sh@33 -- # echo 0 00:05:21.950 18:14:20 -- setup/common.sh@33 -- # return 0 00:05:21.950 18:14:20 -- setup/hugepages.sh@100 -- # resv=0 00:05:21.950 nr_hugepages=1024 00:05:21.950 18:14:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:21.950 resv_hugepages=0 00:05:21.950 18:14:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.950 surplus_hugepages=0 00:05:21.950 18:14:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.950 anon_hugepages=0 00:05:21.950 18:14:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.950 18:14:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.950 18:14:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.950 18:14:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.950 18:14:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.950 18:14:20 -- setup/common.sh@18 -- # local node= 00:05:21.950 18:14:20 -- setup/common.sh@19 -- # local var val 00:05:21.950 18:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.950 18:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.950 18:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.950 18:14:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.950 18:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.950 18:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6947704 kB' 'MemAvailable: 9449312 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456132 kB' 'Inactive: 2370596 kB' 'Active(anon): 128304 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119384 kB' 'Mapped: 50836 kB' 
'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181128 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100864 kB' 'KernelStack: 6800 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.950 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.950 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 
18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- 
setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.951 18:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.951 18:14:20 -- 
setup/common.sh@32 -- # continue 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.951 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.952 18:14:20 -- setup/common.sh@33 -- # echo 1024 00:05:21.952 18:14:20 -- setup/common.sh@33 -- # return 0 00:05:21.952 18:14:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.952 18:14:20 -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.952 18:14:20 -- setup/hugepages.sh@27 -- # local node 00:05:21.952 18:14:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.952 18:14:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.952 18:14:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.952 18:14:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.952 18:14:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.952 18:14:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.952 18:14:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.952 18:14:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.952 18:14:20 -- setup/common.sh@18 -- # local node=0 00:05:21.952 18:14:20 -- setup/common.sh@19 -- # local var val 00:05:21.952 18:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.952 18:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.952 18:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.952 18:14:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.952 18:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.952 18:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6948056 kB' 'MemUsed: 5291068 kB' 'SwapCached: 0 kB' 'Active: 456016 kB' 'Inactive: 2370596 kB' 'Active(anon): 128188 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2708916 kB' 'Mapped: 50836 kB' 'AnonPages: 119268 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80264 kB' 'Slab: 181128 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 
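The block of near-identical xtrace lines above and below is setup/common.sh's get_meminfo helper: it reads a whole meminfo snapshot (the global /proc/meminfo, or /sys/devices/system/node/node0/meminfo once a node id is passed in), then walks it field by field until it reaches the key it was asked for, which is why every non-matching key produces the same IFS / read / [[ ... ]] / continue quartet in the log. A minimal self-contained sketch of that lookup, written against the field names and per-node file layout visible in the trace (an illustration only, not the test suite's own code), could look like this:

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # With a node id, prefer the kernel's per-node meminfo file if it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        # Split e.g. "HugePages_Total:    1024" into key and value.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total      # 1024 on the VM traced above
get_meminfo HugePages_Surp 0     # surplus hugepages on NUMA node 0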
00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.952 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.952 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.953 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.953 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.953 18:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.953 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.953 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.953 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.953 18:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.953 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.953 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.953 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.953 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.953 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.953 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.953 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.953 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.953 18:14:20 -- setup/common.sh@32 -- # continue 00:05:21.953 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.953 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.953 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.953 18:14:20 -- setup/common.sh@33 -- # echo 0 00:05:21.953 18:14:20 -- setup/common.sh@33 -- # return 0 00:05:21.953 18:14:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.953 18:14:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.953 18:14:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.953 18:14:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.953 node0=1024 expecting 1024 00:05:21.953 18:14:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:21.953 18:14:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:21.953 00:05:21.953 real 0m0.969s 00:05:21.953 user 0m0.440s 00:05:21.953 sys 0m0.464s 00:05:21.953 18:14:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.953 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:05:21.953 ************************************ 00:05:21.953 END TEST default_setup 00:05:21.953 ************************************ 00:05:21.953 18:14:20 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:21.953 18:14:20 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.953 18:14:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.953 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:05:21.953 ************************************ 00:05:21.953 START TEST per_node_1G_alloc 00:05:21.953 ************************************ 00:05:21.953 18:14:20 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:21.953 18:14:20 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:21.953 18:14:20 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:21.953 18:14:20 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:21.953 18:14:20 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:21.953 18:14:20 -- setup/hugepages.sh@51 -- # shift 00:05:21.953 18:14:20 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:21.953 18:14:20 -- setup/hugepages.sh@52 -- # local node_ids 00:05:21.953 18:14:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.953 18:14:20 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:21.953 18:14:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:21.953 18:14:20 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:21.953 18:14:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.953 18:14:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:21.953 18:14:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:21.953 18:14:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.953 18:14:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.953 18:14:20 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:21.953 18:14:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:21.953 18:14:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:21.953 18:14:20 -- setup/hugepages.sh@73 -- # return 0 00:05:21.953 18:14:20 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:21.953 18:14:20 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:21.953 18:14:20 -- setup/hugepages.sh@146 -- # setup output 00:05:21.953 18:14:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.953 18:14:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.526 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.526 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.526 18:14:20 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:22.526 18:14:20 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:22.526 18:14:20 -- setup/hugepages.sh@89 -- # local node 00:05:22.526 18:14:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:22.526 18:14:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:22.526 18:14:20 -- setup/hugepages.sh@92 -- # local surp 00:05:22.526 18:14:20 -- setup/hugepages.sh@93 -- # local resv 00:05:22.526 18:14:20 -- setup/hugepages.sh@94 -- # local anon 00:05:22.526 18:14:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:22.526 18:14:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:22.526 18:14:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:22.526 18:14:20 -- setup/common.sh@18 -- # local node= 00:05:22.526 18:14:20 -- setup/common.sh@19 -- # local var val 00:05:22.526 18:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.526 18:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.526 18:14:20 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.526 18:14:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.526 18:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.526 18:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 8004772 kB' 'MemAvailable: 10506384 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456216 kB' 'Inactive: 2370600 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119492 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181060 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100796 kB' 'KernelStack: 6808 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 
-- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 
18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.526 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.526 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.527 18:14:20 -- setup/common.sh@33 -- # echo 0 00:05:22.527 18:14:20 -- setup/common.sh@33 -- # return 0 00:05:22.527 18:14:20 -- setup/hugepages.sh@97 -- # anon=0 00:05:22.527 18:14:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:22.527 18:14:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.527 18:14:20 -- setup/common.sh@18 -- # local node= 00:05:22.527 18:14:20 -- setup/common.sh@19 -- # local var val 00:05:22.527 18:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.527 18:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.527 18:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.527 18:14:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.527 18:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.527 18:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 8004772 kB' 'MemAvailable: 10506384 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456440 kB' 'Inactive: 2370600 kB' 'Active(anon): 128612 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 
kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119716 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181060 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100796 kB' 'KernelStack: 6792 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.527 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.527 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # 
continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.528 18:14:20 -- setup/common.sh@33 -- # echo 0 00:05:22.528 18:14:20 -- setup/common.sh@33 -- # return 0 00:05:22.528 18:14:20 -- setup/hugepages.sh@99 -- # surp=0 00:05:22.528 18:14:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:22.528 18:14:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:22.528 18:14:20 -- setup/common.sh@18 -- # local node= 00:05:22.528 18:14:20 -- setup/common.sh@19 -- # local var val 00:05:22.528 18:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.528 18:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.528 18:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.528 18:14:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.528 18:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.528 18:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 8004940 kB' 'MemAvailable: 10506552 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456476 kB' 'Inactive: 2370600 kB' 'Active(anon): 128648 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119756 kB' 'Mapped: 50972 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181048 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100784 kB' 'KernelStack: 6792 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 
'DirectMap1G: 8388608 kB' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.528 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.528 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 
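These reads belong to the verify_nr_hugepages pass of the per_node_1G_alloc test started above: the 1048576 kB (1 GiB) request from get_test_nr_hugepages, with the 2048 kB Hugepagesize shown in the meminfo snapshots, became nr_hugepages=512, which scripts/setup.sh was asked to pin to node 0 via NRHUGE=512 HUGENODE=0. The verifier now collects AnonHugePages, HugePages_Surp and HugePages_Rsvd and compares HugePages_Total (globally and per node) against that request, just as it did for default_setup earlier. A rough single-node sketch of that check, reusing the hypothetical get_meminfo helper sketched earlier and mirroring only the shape of the traced script, might be:

# Sketch only: shape of the hugepage verification seen in the trace,
# simplified to one NUMA node (uses the get_meminfo sketch from earlier).
verify_nr_hugepages() {
    local nr_expected=$1                    # 512 for the 1 GiB per-node test
    local anon surp resv total node0

    anon=$(get_meminfo AnonHugePages)       # transparent hugepages in use
    surp=$(get_meminfo HugePages_Surp)      # surplus pages in the pool
    resv=$(get_meminfo HugePages_Rsvd)      # reserved but not yet faulted in
    total=$(get_meminfo HugePages_Total)
    echo "anon=$anon surp=$surp resv=$resv total=$total"

    # The global pool must match what was requested, plus surplus/reserved.
    (( total == nr_expected + surp + resv )) || return 1

    # Everything was pinned to node 0 via HUGENODE=0, so the per-node count
    # should match the request as well.
    node0=$(get_meminfo HugePages_Total 0)
    echo "node0=$node0 expecting $nr_expected"
    (( node0 == nr_expected ))
}

verify_nr_hugepages 512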
00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 
18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.529 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.529 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.530 18:14:20 -- setup/common.sh@33 -- # echo 0 00:05:22.530 18:14:20 -- setup/common.sh@33 -- # return 0 00:05:22.530 18:14:20 -- setup/hugepages.sh@100 -- # resv=0 00:05:22.530 nr_hugepages=512 00:05:22.530 18:14:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:22.530 resv_hugepages=0 00:05:22.530 18:14:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:22.530 surplus_hugepages=0 00:05:22.530 18:14:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:22.530 anon_hugepages=0 00:05:22.530 18:14:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:22.530 18:14:20 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:22.530 18:14:20 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:22.530 18:14:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:22.530 18:14:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:22.530 18:14:20 -- setup/common.sh@18 -- # local node= 00:05:22.530 18:14:20 -- setup/common.sh@19 -- # local var val 00:05:22.530 18:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.530 18:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.530 18:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.530 18:14:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.530 18:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.530 18:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 8004940 kB' 'MemAvailable: 10506552 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456012 kB' 'Inactive: 2370600 kB' 'Active(anon): 128184 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119524 kB' 'Mapped: 50836 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181072 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100808 kB' 'KernelStack: 6784 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 
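Note on the traces above and below: the field-by-field scan is just a keyed lookup over /proc/meminfo (or, for per-node queries, the sysfs copy under /sys/devices/system/node/nodeN/meminfo). A minimal standalone sketch of that lookup follows; the name get_meminfo_key and its layout are illustrative only and are not the repo's actual setup/common.sh helper:

    # Usage: get_meminfo_key HugePages_Total        -> system-wide value
    #        get_meminfo_key HugePages_Surp 0       -> value for NUMA node 0
    get_meminfo_key() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        if [[ -n $node ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node sysfs lines carry a "Node N " prefix; strip it so both
        # files parse the same way, then match the requested field name.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$file")
        return 1
    }

Called as get_meminfo_key HugePages_Rsvd against the meminfo contents dumped in this log, it would print 0, matching the "echo 0" / "return 0" pair seen in the trace just above.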
00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.530 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.530 18:14:20 -- setup/common.sh@32 -- # continue 
00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.531 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.531 18:14:20 -- setup/common.sh@33 -- # echo 512 00:05:22.531 18:14:20 -- setup/common.sh@33 -- # return 0 00:05:22.531 18:14:20 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:22.531 18:14:20 -- setup/hugepages.sh@112 -- # get_nodes 00:05:22.531 18:14:20 -- setup/hugepages.sh@27 -- # local node 00:05:22.531 18:14:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.531 18:14:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:22.531 18:14:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:22.531 18:14:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.531 18:14:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:22.531 18:14:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.531 18:14:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:22.531 18:14:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.531 18:14:20 -- setup/common.sh@18 -- # local node=0 00:05:22.531 18:14:20 -- 
setup/common.sh@19 -- # local var val 00:05:22.531 18:14:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.531 18:14:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.531 18:14:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.531 18:14:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.531 18:14:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.531 18:14:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.531 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 8004940 kB' 'MemUsed: 4234184 kB' 'SwapCached: 0 kB' 'Active: 456272 kB' 'Inactive: 2370600 kB' 'Active(anon): 128444 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2708916 kB' 'Mapped: 50836 kB' 'AnonPages: 119528 kB' 'Shmem: 10488 kB' 'KernelStack: 6800 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80264 kB' 'Slab: 181068 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
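A quick sanity check on the node0 dump just printed: unlike /proc/meminfo, the per-node report carries MemUsed rather than MemAvailable/Buffers/Cached, and MemUsed is simply MemTotal minus MemFree, 12239124 kB - 8004940 kB = 4234184 kB, which matches the value in the printf line above.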
]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 
00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 
00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # continue 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.532 18:14:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.532 18:14:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.532 18:14:20 -- setup/common.sh@33 -- # echo 0 00:05:22.532 18:14:20 -- setup/common.sh@33 -- # return 0 00:05:22.532 18:14:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.532 18:14:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.532 18:14:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.532 18:14:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.532 node0=512 expecting 512 00:05:22.532 18:14:20 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:22.532 18:14:20 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:22.532 00:05:22.533 real 0m0.540s 00:05:22.533 user 0m0.264s 00:05:22.533 sys 0m0.309s 00:05:22.533 18:14:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.533 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.533 ************************************ 00:05:22.533 END TEST per_node_1G_alloc 00:05:22.533 ************************************ 00:05:22.533 18:14:20 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:22.533 18:14:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.533 18:14:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.533 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.533 ************************************ 00:05:22.533 START TEST even_2G_alloc 00:05:22.533 ************************************ 00:05:22.792 18:14:20 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:22.792 18:14:20 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:22.792 18:14:20 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:22.792 18:14:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:22.792 18:14:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:22.792 18:14:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:22.792 18:14:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:22.792 18:14:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:22.792 18:14:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:22.792 18:14:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:22.792 18:14:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:22.792 18:14:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:22.792 18:14:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:22.792 18:14:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 
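The even_2G_alloc test starting above is called with size=2097152 (kB) and the trace settles on nr_hugepages=1024; with the 2048 kB Hugepagesize reported in the meminfo dumps, that is the expected page count. A hedged sketch of the size-to-page arithmetic implied here (illustrative variable names, not the actual get_test_nr_hugepages body):

    default_hugepage_kb=2048      # Hugepagesize from /proc/meminfo
    requested_kb=2097152          # 2 GiB, the size passed to the test
    nr_hugepages=$(( requested_kb / default_hugepage_kb ))
    echo "nr_hugepages=$nr_hugepages"   # prints nr_hugepages=1024

With a single node in this VM (no_nodes=1 in the trace), all 1024 pages are assigned to node0.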
00:05:22.792 18:14:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:22.792 18:14:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:22.792 18:14:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:22.792 18:14:20 -- setup/hugepages.sh@83 -- # : 0 00:05:22.792 18:14:20 -- setup/hugepages.sh@84 -- # : 0 00:05:22.792 18:14:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:22.792 18:14:20 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:22.792 18:14:20 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:22.792 18:14:20 -- setup/hugepages.sh@153 -- # setup output 00:05:22.792 18:14:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.792 18:14:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.055 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.055 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.055 18:14:21 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:23.055 18:14:21 -- setup/hugepages.sh@89 -- # local node 00:05:23.055 18:14:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.055 18:14:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.055 18:14:21 -- setup/hugepages.sh@92 -- # local surp 00:05:23.055 18:14:21 -- setup/hugepages.sh@93 -- # local resv 00:05:23.055 18:14:21 -- setup/hugepages.sh@94 -- # local anon 00:05:23.055 18:14:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.055 18:14:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.055 18:14:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.055 18:14:21 -- setup/common.sh@18 -- # local node= 00:05:23.055 18:14:21 -- setup/common.sh@19 -- # local var val 00:05:23.055 18:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.055 18:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.055 18:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.055 18:14:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.055 18:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.055 18:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.055 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.055 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.055 18:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6961588 kB' 'MemAvailable: 9463200 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456288 kB' 'Inactive: 2370600 kB' 'Active(anon): 128460 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119544 kB' 'Mapped: 50908 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181076 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100812 kB' 'KernelStack: 6776 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 
0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:23.055 18:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.055 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.055 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.055 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.055 18:14:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.055 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.055 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.055 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.055 18:14:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.055 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.055 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.055 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.055 18:14:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 
18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- 
setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.056 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.056 18:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.056 18:14:21 -- setup/common.sh@33 -- # echo 0 00:05:23.056 18:14:21 -- setup/common.sh@33 -- # return 0 00:05:23.056 18:14:21 -- setup/hugepages.sh@97 -- # anon=0 00:05:23.056 18:14:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:23.056 18:14:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.056 18:14:21 -- setup/common.sh@18 -- # local node= 00:05:23.056 18:14:21 -- setup/common.sh@19 -- # local var val 00:05:23.056 18:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.056 18:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.056 18:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.057 18:14:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.057 18:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.057 18:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6961588 kB' 'MemAvailable: 9463200 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456084 kB' 'Inactive: 2370600 kB' 'Active(anon): 128256 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119340 kB' 'Mapped: 50964 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181080 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100816 kB' 'KernelStack: 6776 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- 
setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.057 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.057 18:14:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 
-- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- 
setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.058 18:14:21 -- setup/common.sh@33 -- # echo 0 00:05:23.058 18:14:21 -- setup/common.sh@33 -- # return 0 00:05:23.058 18:14:21 -- setup/hugepages.sh@99 -- # surp=0 
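The pass that ends here is the surplus-page lookup: setup/common.sh walks the /proc/meminfo snapshot field by field, skipping every key that is not HugePages_Surp, then echoes its value, which setup/hugepages.sh captures as surp=0. A minimal standalone sketch of that lookup pattern, reconstructed from the trace (the real helper snapshots the file with mapfile and strips any leading "Node N " prefix; the function name and loop form below are illustrative only):

    # Sketch: fetch one field from /proc/meminfo, mirroring the traced
    # get_meminfo lookup for the system-wide case (node unset).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # e.g. surp=$(get_meminfo_sketch HugePages_Surp)   # prints 0 on this host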
00:05:23.058 18:14:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.058 18:14:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.058 18:14:21 -- setup/common.sh@18 -- # local node= 00:05:23.058 18:14:21 -- setup/common.sh@19 -- # local var val 00:05:23.058 18:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.058 18:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.058 18:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.058 18:14:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.058 18:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.058 18:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6961848 kB' 'MemAvailable: 9463460 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456060 kB' 'Inactive: 2370600 kB' 'Active(anon): 128232 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119316 kB' 'Mapped: 50964 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181080 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100816 kB' 'KernelStack: 6760 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.058 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.058 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 
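The meminfo snapshot printed at the start of this HugePages_Rsvd pass is internally consistent with the even 2G allocation under test: Hugepagesize is 2048 kB, the pool holds 1024 pages, and with HugePages_Rsvd and HugePages_Surp both 0 every page is still free. A quick check of the Hugetlb accounting, using only values copied from the snapshot:

    # Values copied from the snapshot above.
    total=1024 page_kb=2048
    echo $(( total * page_kb ))   # 2097152, matches 'Hugetlb: 2097152 kB'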
00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.059 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.059 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 
-- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.060 18:14:21 -- setup/common.sh@33 -- # echo 0 00:05:23.060 18:14:21 -- setup/common.sh@33 -- # return 0 00:05:23.060 18:14:21 -- setup/hugepages.sh@100 -- # resv=0 00:05:23.060 nr_hugepages=1024 00:05:23.060 18:14:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:23.060 resv_hugepages=0 00:05:23.060 18:14:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.060 surplus_hugepages=0 00:05:23.060 18:14:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.060 anon_hugepages=0 00:05:23.060 18:14:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.060 18:14:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.060 18:14:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:23.060 18:14:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.060 18:14:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.060 18:14:21 -- setup/common.sh@18 -- # local node= 00:05:23.060 18:14:21 -- setup/common.sh@19 -- # local var val 00:05:23.060 18:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.060 18:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.060 18:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.060 18:14:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.060 18:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.060 18:14:21 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6962144 kB' 'MemAvailable: 9463756 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 455988 kB' 'Inactive: 2370600 kB' 'Active(anon): 128160 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119252 kB' 'Mapped: 50964 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181076 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100812 kB' 'KernelStack: 6728 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.060 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.060 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 
18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 
18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.061 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.061 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.062 18:14:21 -- setup/common.sh@33 -- # echo 1024 00:05:23.062 18:14:21 -- setup/common.sh@33 -- # return 0 00:05:23.062 18:14:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.062 18:14:21 -- setup/hugepages.sh@112 -- # get_nodes 00:05:23.062 18:14:21 -- setup/hugepages.sh@27 -- # local node 00:05:23.062 18:14:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.062 18:14:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:23.062 18:14:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:23.062 18:14:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:23.062 18:14:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.062 18:14:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.062 18:14:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:23.062 18:14:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.062 18:14:21 -- setup/common.sh@18 -- # local node=0 00:05:23.062 18:14:21 -- setup/common.sh@19 -- # local var val 00:05:23.062 18:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.062 18:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.062 18:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:23.062 18:14:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:23.062 18:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.062 18:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6962144 kB' 'MemUsed: 5276980 kB' 'SwapCached: 0 kB' 'Active: 455988 kB' 'Inactive: 2370600 kB' 'Active(anon): 128160 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2708916 kB' 'Mapped: 50964 kB' 'AnonPages: 119512 kB' 'Shmem: 10488 kB' 'KernelStack: 6796 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80264 kB' 'Slab: 181076 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Surp: 0' 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 
18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.062 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.062 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.063 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.063 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.063 18:14:21 -- setup/common.sh@33 -- # echo 0 00:05:23.063 18:14:21 -- setup/common.sh@33 -- # return 0 00:05:23.063 18:14:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.063 18:14:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.063 18:14:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.063 18:14:21 -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.063 node0=1024 expecting 1024 00:05:23.063 18:14:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:23.063 18:14:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:23.063 00:05:23.063 real 0m0.513s 00:05:23.063 user 0m0.269s 00:05:23.063 sys 0m0.278s 00:05:23.063 18:14:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.063 18:14:21 -- common/autotest_common.sh@10 -- # set +x 00:05:23.063 ************************************ 00:05:23.063 END TEST even_2G_alloc 00:05:23.063 ************************************ 00:05:23.322 18:14:21 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:23.322 18:14:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.322 18:14:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.322 18:14:21 -- common/autotest_common.sh@10 -- # set +x 00:05:23.322 ************************************ 00:05:23.322 START TEST odd_alloc 00:05:23.322 ************************************ 00:05:23.322 18:14:21 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:23.322 18:14:21 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:23.322 18:14:21 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:23.322 18:14:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:23.323 18:14:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:23.323 18:14:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:23.323 18:14:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:23.323 18:14:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:23.323 18:14:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.323 18:14:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:23.323 18:14:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:23.323 18:14:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.323 18:14:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.323 18:14:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:23.323 18:14:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:23.323 18:14:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:23.323 18:14:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:23.323 18:14:21 -- setup/hugepages.sh@83 -- # : 0 00:05:23.323 18:14:21 -- setup/hugepages.sh@84 -- # : 0 00:05:23.323 18:14:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:23.323 18:14:21 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:23.323 18:14:21 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:23.323 18:14:21 -- setup/hugepages.sh@160 -- # setup output 00:05:23.323 18:14:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.323 18:14:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.585 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.585 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.585 18:14:21 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:23.585 18:14:21 -- setup/hugepages.sh@89 -- # local node 00:05:23.585 18:14:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.585 18:14:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.585 18:14:21 -- setup/hugepages.sh@92 -- # local surp 00:05:23.585 18:14:21 -- setup/hugepages.sh@93 -- # local resv 00:05:23.585 18:14:21 -- 
setup/hugepages.sh@94 -- # local anon 00:05:23.585 18:14:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.585 18:14:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.585 18:14:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.585 18:14:21 -- setup/common.sh@18 -- # local node= 00:05:23.585 18:14:21 -- setup/common.sh@19 -- # local var val 00:05:23.585 18:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.585 18:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.585 18:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.585 18:14:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.585 18:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.585 18:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6958644 kB' 'MemAvailable: 9460256 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456500 kB' 'Inactive: 2370600 kB' 'Active(anon): 128672 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119764 kB' 'Mapped: 51032 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181096 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100832 kB' 'KernelStack: 6872 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 
00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.585 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.585 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 
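The odd_alloc run traced above sizes its pool with get_test_nr_hugepages 2098176 and lands on nr_hugepages=1025, an intentionally odd count, before exporting HUGEMEM=2049 and re-running scripts/setup.sh. As a rough sketch (not the SPDK helper itself; the round-up is an assumption inferred from the 2098176 kB -> 1025 result in this trace), the sizing reduces to a ceiling division by the 2048 kB default page size:

  # Hypothetical sketch of the sizing step, assuming ceiling division by the
  # default 2048 kB hugepage size reported in /proc/meminfo.
  default_hugepages=2048                                   # kB per page
  size=2098176                                             # kB requested (2049 MB)
  nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
  echo "nr_hugepages=$nr_hugepages"                        # prints 1025, an odd count
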
00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.586 18:14:21 -- setup/common.sh@33 -- # echo 0 00:05:23.586 18:14:21 -- setup/common.sh@33 -- # return 0 00:05:23.586 18:14:21 -- setup/hugepages.sh@97 -- # anon=0 00:05:23.586 18:14:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:23.586 18:14:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.586 18:14:21 -- setup/common.sh@18 -- # local node= 00:05:23.586 18:14:21 -- setup/common.sh@19 -- # local var val 00:05:23.586 18:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.586 18:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.586 18:14:21 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:23.586 18:14:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.586 18:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.586 18:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6958644 kB' 'MemAvailable: 9460256 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456268 kB' 'Inactive: 2370600 kB' 'Active(anon): 128440 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119536 kB' 'Mapped: 50964 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181104 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100840 kB' 'KernelStack: 6792 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.586 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.586 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.587 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.587 18:14:21 -- setup/common.sh@31 
-- # read -r var val _ 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.587 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- 
setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.588 18:14:21 -- setup/common.sh@33 -- # echo 0 00:05:23.588 18:14:21 -- setup/common.sh@33 -- # return 0 00:05:23.588 18:14:21 -- setup/hugepages.sh@99 -- # surp=0 00:05:23.588 18:14:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.588 18:14:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.588 18:14:21 -- setup/common.sh@18 -- # local node= 00:05:23.588 18:14:21 -- setup/common.sh@19 -- # local var val 00:05:23.588 18:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.588 18:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.588 18:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.588 18:14:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.588 18:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.588 18:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6958644 kB' 'MemAvailable: 9460256 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456288 kB' 'Inactive: 2370600 kB' 'Active(anon): 128460 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119580 kB' 'Mapped: 50964 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 
181104 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100840 kB' 'KernelStack: 6792 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.588 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.588 18:14:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 
-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 
18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 
-- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.589 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.589 18:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 
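Each of the get_meminfo calls traced above (AnonHugePages -> anon=0, HugePages_Surp -> surp=0, HugePages_Rsvd -> resv=0) is the same scan: read /proc/meminfo, split every record on IFS=': ', continue past non-matching keys, and echo the value of the one requested field. A minimal self-contained sketch of that scan (function and variable names here are illustrative, not the script's own):

  # Illustrative stand-in for the traced scan; not setup/common.sh itself.
  get_meminfo_value() {
    local target=$1 var val _
    while IFS=': ' read -r var val _; do        # split "Key:   value kB" records
      [[ $var == "$target" ]] || continue       # every other key is skipped
      echo "$val"                               # second field; any "kB" unit lands in $_
      return 0
    done < /proc/meminfo
  }
  get_meminfo_value HugePages_Surp              # prints 0 on this run
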
00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.590 18:14:21 -- setup/common.sh@33 -- # echo 0 00:05:23.590 18:14:21 -- setup/common.sh@33 -- # return 0 00:05:23.590 18:14:21 -- setup/hugepages.sh@100 -- # resv=0 00:05:23.590 nr_hugepages=1025 00:05:23.590 18:14:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:23.590 resv_hugepages=0 00:05:23.590 18:14:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.590 surplus_hugepages=0 00:05:23.590 18:14:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.590 anon_hugepages=0 00:05:23.590 18:14:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.590 18:14:21 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:23.590 18:14:21 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:23.590 18:14:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.590 18:14:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.590 18:14:21 -- setup/common.sh@18 -- # local node= 00:05:23.590 18:14:21 -- setup/common.sh@19 -- # local var val 00:05:23.590 18:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.590 18:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.590 18:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.590 18:14:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.590 18:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.590 18:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6959836 kB' 'MemAvailable: 9461448 kB' 'Buffers: 2684 kB' 'Cached: 2706232 kB' 'SwapCached: 0 kB' 'Active: 456120 kB' 'Inactive: 2370600 kB' 'Active(anon): 128292 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119404 kB' 'Mapped: 50836 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181120 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100856 kB' 'KernelStack: 6816 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 
'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.590 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.590 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 
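With anon, surp, and resv all 0, the verification that follows is plain arithmetic: the HugePages_Total read back from the kernel must equal nr_hugepages + surp + resv, and the per-node bookkeeping (a single NUMA node here, so the whole pool sits on node0) must add up to the same 1025, mirroring the "node0=... expecting ..." summary printed at the end of even_2G_alloc above. A compact sketch of that check, with values hard-coded from this run and illustrative variable names:

  # Illustrative recap of the check traced around this point (values from this run).
  nr_hugepages=1025; surp=0; resv=0
  total=1025                                    # HugePages_Total read back from /proc/meminfo
  (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage total"
  declare -A nodes_sys=([0]=1025)               # one node, whole pool on node0
  for node in "${!nodes_sys[@]}"; do
    echo "node${node}=${nodes_sys[$node]} expecting $nr_hugepages"
  done
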
00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 
-- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:14:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.592 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.592 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.592 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.592 18:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.592 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.592 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.592 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.592 18:14:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.592 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.592 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.592 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.592 18:14:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.592 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.592 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.592 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.592 18:14:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.592 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.592 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.592 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.592 18:14:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.592 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.851 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.851 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.851 18:14:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.851 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.851 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.851 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.851 18:14:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.851 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.852 18:14:21 -- setup/common.sh@33 -- # echo 1025 00:05:23.852 18:14:21 -- setup/common.sh@33 -- # return 0 00:05:23.852 18:14:21 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:23.852 18:14:21 -- setup/hugepages.sh@112 -- # get_nodes 00:05:23.852 18:14:21 -- setup/hugepages.sh@27 -- # local node 00:05:23.852 18:14:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.852 18:14:21 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:23.852 18:14:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:23.852 18:14:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:23.852 18:14:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.852 18:14:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.852 18:14:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:23.852 18:14:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.852 18:14:21 -- setup/common.sh@18 -- # local node=0 00:05:23.852 18:14:21 -- setup/common.sh@19 -- # local var val 00:05:23.852 18:14:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.852 18:14:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.852 18:14:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:23.852 18:14:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:23.852 18:14:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.852 18:14:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6961144 kB' 'MemUsed: 5277980 kB' 'SwapCached: 0 kB' 'Active: 456500 kB' 'Inactive: 2370600 kB' 'Active(anon): 128672 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708916 kB' 'Mapped: 50836 kB' 'AnonPages: 119804 kB' 'Shmem: 10488 kB' 'KernelStack: 6800 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80264 kB' 'Slab: 181112 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.852 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.852 18:14:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- 
setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # continue 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.853 18:14:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.853 18:14:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.853 18:14:21 -- setup/common.sh@33 -- # echo 0 00:05:23.853 18:14:21 -- setup/common.sh@33 -- # return 0 00:05:23.853 18:14:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.853 18:14:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.853 18:14:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.853 18:14:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.853 node0=1025 expecting 1025 00:05:23.853 18:14:21 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:23.853 18:14:21 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:23.853 00:05:23.853 real 0m0.525s 00:05:23.853 user 0m0.260s 00:05:23.853 sys 0m0.300s 00:05:23.853 18:14:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.853 18:14:21 -- common/autotest_common.sh@10 -- # set +x 00:05:23.853 ************************************ 00:05:23.853 END TEST odd_alloc 00:05:23.853 ************************************ 00:05:23.853 18:14:21 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:23.853 18:14:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.853 18:14:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.853 18:14:21 -- common/autotest_common.sh@10 -- # set +x 00:05:23.853 ************************************ 00:05:23.853 START TEST custom_alloc 00:05:23.853 ************************************ 00:05:23.853 18:14:21 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:23.853 18:14:21 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:23.853 18:14:21 -- setup/hugepages.sh@169 -- # local node 00:05:23.853 18:14:21 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:23.853 18:14:21 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:23.853 18:14:21 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 
_nr_hugepages=0 00:05:23.853 18:14:21 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:23.853 18:14:21 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:23.853 18:14:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:23.853 18:14:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:23.853 18:14:21 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:23.853 18:14:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:23.853 18:14:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:23.853 18:14:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.853 18:14:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:23.853 18:14:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:23.853 18:14:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.853 18:14:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.853 18:14:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:23.853 18:14:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:23.853 18:14:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:23.853 18:14:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:23.853 18:14:21 -- setup/hugepages.sh@83 -- # : 0 00:05:23.853 18:14:21 -- setup/hugepages.sh@84 -- # : 0 00:05:23.853 18:14:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:23.853 18:14:21 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:23.853 18:14:21 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:23.853 18:14:21 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:23.853 18:14:21 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:23.853 18:14:21 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:23.853 18:14:21 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:23.853 18:14:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:23.853 18:14:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.853 18:14:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:23.853 18:14:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:23.853 18:14:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.853 18:14:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.853 18:14:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:23.853 18:14:21 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:23.853 18:14:21 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:23.853 18:14:21 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:23.853 18:14:21 -- setup/hugepages.sh@78 -- # return 0 00:05:23.853 18:14:21 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:23.853 18:14:21 -- setup/hugepages.sh@187 -- # setup output 00:05:23.853 18:14:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.853 18:14:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.114 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.114 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.114 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.114 18:14:22 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:24.114 18:14:22 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:24.114 18:14:22 -- setup/hugepages.sh@89 -- # local node 00:05:24.114 18:14:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.114 18:14:22 -- setup/hugepages.sh@91 -- # local 
sorted_s 00:05:24.114 18:14:22 -- setup/hugepages.sh@92 -- # local surp 00:05:24.114 18:14:22 -- setup/hugepages.sh@93 -- # local resv 00:05:24.114 18:14:22 -- setup/hugepages.sh@94 -- # local anon 00:05:24.114 18:14:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.114 18:14:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.114 18:14:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.114 18:14:22 -- setup/common.sh@18 -- # local node= 00:05:24.114 18:14:22 -- setup/common.sh@19 -- # local var val 00:05:24.114 18:14:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.114 18:14:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.114 18:14:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.114 18:14:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.114 18:14:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.114 18:14:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.114 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 8008424 kB' 'MemAvailable: 10510040 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456608 kB' 'Inactive: 2370604 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119924 kB' 'Mapped: 51012 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181160 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100896 kB' 'KernelStack: 6824 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # 
continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.115 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.116 18:14:22 -- setup/common.sh@33 -- # echo 0 00:05:24.116 18:14:22 -- setup/common.sh@33 -- # return 0 00:05:24.116 18:14:22 -- setup/hugepages.sh@97 -- # anon=0 00:05:24.116 18:14:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.116 18:14:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.116 18:14:22 -- setup/common.sh@18 -- # local node= 00:05:24.116 18:14:22 -- setup/common.sh@19 -- # local var val 00:05:24.116 18:14:22 -- setup/common.sh@20 -- # local mem_f mem 
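The trace above and below shows how setup/common.sh resolves a single meminfo field: it points mem_f at /proc/meminfo (or at /sys/devices/system/node/nodeN/meminfo for a per-node query), reads the file with IFS=': ', and echoes the value once the requested key matches. A minimal bash sketch of that lookup, with a hypothetical helper name (get_meminfo_value) and an awk scan standing in for the read loop in the real script:

    # Sketch only: hypothetical helper, not the exact function from
    # setup/common.sh; it mirrors the field scan visible in the trace.
    get_meminfo_value() {
        local key=$1 node=${2:-} file=/proc/meminfo
        # Per-node lookups come from sysfs, as the trace does for node 0.
        [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
        # Per-node files prefix each line with "Node <n> "; strip it so the
        # same "Key: value" match works for both sources, then print the
        # numeric value of the first matching key.
        awk -v k="${key}:" '{ sub(/^Node [0-9]+ /, "") } $1 == k { print $2; exit }' "$file"
    }

For the AnonHugePages query being traced here this would print 0; earlier in the log the same kind of scan returned 1025 for HugePages_Total.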
00:05:24.116 18:14:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.116 18:14:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.116 18:14:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.116 18:14:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.116 18:14:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 8008424 kB' 'MemAvailable: 10510040 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456224 kB' 'Inactive: 2370604 kB' 'Active(anon): 128396 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119740 kB' 'Mapped: 50836 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181164 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100900 kB' 'KernelStack: 6768 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.116 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.116 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # 
continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.117 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.117 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.118 18:14:22 -- setup/common.sh@33 -- # echo 0 00:05:24.118 18:14:22 -- setup/common.sh@33 -- # return 0 00:05:24.118 18:14:22 -- setup/hugepages.sh@99 -- # surp=0 00:05:24.118 18:14:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:24.118 18:14:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:24.118 18:14:22 -- setup/common.sh@18 -- # local node= 00:05:24.118 18:14:22 -- setup/common.sh@19 -- # local var val 00:05:24.118 18:14:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.118 18:14:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.118 18:14:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.118 18:14:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.118 18:14:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.118 18:14:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 8008424 kB' 'MemAvailable: 10510040 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456096 kB' 'Inactive: 2370604 kB' 'Active(anon): 128268 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'AnonPages: 119608 kB' 'Mapped: 50836 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181172 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100908 kB' 'KernelStack: 6800 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.118 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.118 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
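The HugePages_Surp and HugePages_Rsvd scans in this stretch feed verify_nr_hugepages, which checks that the 512 pages requested through HUGENODE='nodes_hp[0]=512' actually landed on node 0. A rough bash sketch of that bookkeeping, reusing the hypothetical get_meminfo_value helper from the earlier sketch and mirroring the style of the (( 1025 == nr_hugepages + surp + resv )) test traced earlier at setup/hugepages.sh@110; the real per-node accounting in setup/hugepages.sh is more involved:

    # Sketch only: hypothetical wrapper, not the exact logic of
    # setup/hugepages.sh; it illustrates the totals-vs-expected check
    # that the surrounding meminfo scans feed.
    verify_hugepages_sketch() {
        local expected=$1 node=${2:-0}
        local total surp resv node_total
        total=$(get_meminfo_value HugePages_Total)
        surp=$(get_meminfo_value HugePages_Surp)
        resv=$(get_meminfo_value HugePages_Rsvd)
        # Kernel-wide total must account for requested, surplus and
        # reserved pages (cf. the hugepages.sh@110 check traced earlier).
        (( total == expected + surp + resv )) || return 1
        # Per-node view: the node named in HUGENODE should report the
        # requested count, as in the "node0=... expecting ..." output.
        node_total=$(get_meminfo_value HugePages_Total "$node")
        echo "node${node}=${node_total} expecting ${expected}"
        (( node_total == expected ))
    }

For the custom_alloc case this would be called with expected=512, matching the HugePages_Total: 512 values in the meminfo dumps above.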
00:05:24.379 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.379 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.379 18:14:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 
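The long runs of '[[ <field> == ... ]]' followed by 'continue' are setup/common.sh's get_meminfo walking a captured copy of /proc/meminfo field by field until it reaches the requested key (HugePages_Rsvd at this point). A minimal standalone sketch of that lookup, using a hypothetical helper name (lookup_meminfo) and omitting the script's caching and edge-case handling:

    # Return the value of one meminfo field, optionally for a single NUMA node.
    # Hypothetical stand-in for setup/common.sh's get_meminfo; simplified sketch.
    lookup_meminfo() {
        local key=$1 node=${2-} file=/proc/meminfo line var val _
        [[ -n $node ]] && file=/sys/devices/system/node/node"$node"/meminfo
        while IFS= read -r line; do
            # Per-node files prefix every line with "Node <id> "; drop it first.
            [[ -n $node ]] && line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$key" ]] || continue   # the skips traced above
            echo "$val"
            return 0
        done <"$file"
        return 1
    }

    lookup_meminfo HugePages_Rsvd      # 0 on this run
    lookup_meminfo HugePages_Surp 0    # node 0 value, queried further down in the log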
00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.380 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.380 18:14:22 -- setup/common.sh@33 -- # echo 0 00:05:24.380 18:14:22 -- setup/common.sh@33 -- # return 0 00:05:24.380 18:14:22 -- setup/hugepages.sh@100 -- # resv=0 00:05:24.380 nr_hugepages=512 00:05:24.380 18:14:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:24.380 resv_hugepages=0 00:05:24.380 18:14:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:24.380 surplus_hugepages=0 00:05:24.380 18:14:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:24.380 anon_hugepages=0 00:05:24.380 18:14:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:24.380 18:14:22 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:24.380 18:14:22 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:24.380 18:14:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:24.380 18:14:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:24.380 18:14:22 -- setup/common.sh@18 -- # local node= 00:05:24.380 18:14:22 -- setup/common.sh@19 -- # local var val 00:05:24.380 18:14:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.380 18:14:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.380 18:14:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.380 18:14:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.380 18:14:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.380 18:14:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.380 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 8008424 kB' 'MemAvailable: 10510040 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456076 kB' 'Inactive: 2370604 kB' 'Active(anon): 128248 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119588 kB' 'Mapped: 50836 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181172 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100908 kB' 'KernelStack: 6784 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 
-- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 
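For reference, the meminfo snapshot printed a few entries back is self-consistent: 512 huge pages of 2048 kB each account exactly for the 1048576 kB reported under Hugetlb. A quick arithmetic check with the values copied from that snapshot (this run only):

    hugepages_total=512
    hugepagesize_kb=2048
    hugetlb_kb=1048576
    (( hugepages_total * hugepagesize_kb == hugetlb_kb )) \
        && echo "pool size matches Hugetlb accounting"   # 512 * 2048 = 1048576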
00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.381 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.381 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 
-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.382 18:14:22 -- setup/common.sh@33 -- # echo 512 00:05:24.382 18:14:22 -- setup/common.sh@33 -- # return 0 00:05:24.382 18:14:22 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:24.382 18:14:22 -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.382 18:14:22 -- setup/hugepages.sh@27 -- # local node 
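The checks traced around hugepages.sh@107-@110, together with the per-node loop that starts next, boil down to a small piece of accounting: the globally reported pool must equal the requested page count plus any surplus and reserved pages, and the per-node pools must add up to the same figure. A sketch of that logic, reusing the hypothetical lookup_meminfo helper from the earlier sketch and assuming the single NUMA node this VM has:

    requested=512
    resv=$(lookup_meminfo HugePages_Rsvd)      # 0 in the trace above
    surp=$(lookup_meminfo HugePages_Surp)      # 0 in the trace above
    total=$(lookup_meminfo HugePages_Total)    # 512 in the trace above

    (( total == requested + surp + resv )) || echo "global hugepage count is off" >&2

    # Per-node accounting, mirroring the nodes_test[] loop that follows.
    declare -A node_pool=()
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        node_pool[$node]=$(lookup_meminfo HugePages_Total "$node")
    done
    echo "node0=${node_pool[0]} expecting $requested"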
00:05:24.382 18:14:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.382 18:14:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:24.382 18:14:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.382 18:14:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.382 18:14:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.382 18:14:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.382 18:14:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.382 18:14:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.382 18:14:22 -- setup/common.sh@18 -- # local node=0 00:05:24.382 18:14:22 -- setup/common.sh@19 -- # local var val 00:05:24.382 18:14:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.382 18:14:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.382 18:14:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.382 18:14:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.382 18:14:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.382 18:14:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 8008720 kB' 'MemUsed: 4230404 kB' 'SwapCached: 0 kB' 'Active: 456132 kB' 'Inactive: 2370604 kB' 'Active(anon): 128304 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708920 kB' 'Mapped: 50836 kB' 'AnonPages: 119744 kB' 'Shmem: 10488 kB' 'KernelStack: 6816 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80264 kB' 'Slab: 181180 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': 
' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.382 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.382 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- 
setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 
18:14:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.383 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.383 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.383 18:14:22 -- setup/common.sh@33 -- # echo 0 00:05:24.383 18:14:22 -- setup/common.sh@33 -- # return 0 00:05:24.383 18:14:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.383 18:14:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.383 18:14:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.383 18:14:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.383 node0=512 expecting 512 00:05:24.383 18:14:22 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:24.383 18:14:22 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:24.383 00:05:24.383 real 0m0.532s 00:05:24.383 user 0m0.270s 00:05:24.383 sys 0m0.298s 00:05:24.383 18:14:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.383 18:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.383 ************************************ 00:05:24.383 END TEST custom_alloc 00:05:24.383 ************************************ 00:05:24.383 18:14:22 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:24.383 18:14:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.383 18:14:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.383 18:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.383 ************************************ 00:05:24.383 START TEST no_shrink_alloc 00:05:24.383 ************************************ 00:05:24.383 18:14:22 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:24.384 18:14:22 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:24.384 18:14:22 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:24.384 18:14:22 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 
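The no_shrink_alloc test starting here asks get_test_nr_hugepages for 2097152 (a kB figure, by the look of the numbers) pinned to node 0; divided by the 2048 kB huge page size that is the 1024-page pool visible in the snapshots below. A sketch of that sizing plus the kernel's standard per-node sysfs knob for such a pool (not necessarily the exact path the test scripts use; writing it needs root):

    # Figures taken from this log: 2 GiB of 2048 kB pages on node 0.
    target_kb=2097152
    hugepagesize_kb=2048
    nr_hugepages=$(( target_kb / hugepagesize_kb ))   # 1024
    echo "nr_hugepages=$nr_hugepages"

    # Per-node interface for the 2 MB pool (requires root):
    echo "$nr_hugepages" | sudo tee \
        /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages >/dev/null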
00:05:24.384 18:14:22 -- setup/hugepages.sh@51 -- # shift 00:05:24.384 18:14:22 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:24.384 18:14:22 -- setup/hugepages.sh@52 -- # local node_ids 00:05:24.384 18:14:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:24.384 18:14:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:24.384 18:14:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:24.384 18:14:22 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:24.384 18:14:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.384 18:14:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:24.384 18:14:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:24.384 18:14:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.384 18:14:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.384 18:14:22 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:24.384 18:14:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:24.384 18:14:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:24.384 18:14:22 -- setup/hugepages.sh@73 -- # return 0 00:05:24.384 18:14:22 -- setup/hugepages.sh@198 -- # setup output 00:05:24.384 18:14:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.384 18:14:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.643 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.643 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.643 18:14:22 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:24.643 18:14:22 -- setup/hugepages.sh@89 -- # local node 00:05:24.643 18:14:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.643 18:14:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.643 18:14:22 -- setup/hugepages.sh@92 -- # local surp 00:05:24.643 18:14:22 -- setup/hugepages.sh@93 -- # local resv 00:05:24.643 18:14:22 -- setup/hugepages.sh@94 -- # local anon 00:05:24.643 18:14:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.643 18:14:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.643 18:14:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.643 18:14:22 -- setup/common.sh@18 -- # local node= 00:05:24.643 18:14:22 -- setup/common.sh@19 -- # local var val 00:05:24.643 18:14:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.643 18:14:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.643 18:14:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.643 18:14:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.643 18:14:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.643 18:14:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.643 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.643 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6957516 kB' 'MemAvailable: 9459132 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456684 kB' 'Inactive: 2370604 kB' 'Active(anon): 128856 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 
'AnonPages: 119952 kB' 'Mapped: 51076 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181172 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100908 kB' 'KernelStack: 6808 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 324256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 
18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.644 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.644 18:14:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 
18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.907 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.907 18:14:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.908 18:14:22 -- setup/common.sh@33 -- # echo 0 00:05:24.908 18:14:22 -- setup/common.sh@33 -- # return 0 00:05:24.908 18:14:22 -- setup/hugepages.sh@97 -- # anon=0 00:05:24.908 18:14:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.908 18:14:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.908 18:14:22 -- setup/common.sh@18 -- # local node= 00:05:24.908 18:14:22 -- setup/common.sh@19 -- # local var val 00:05:24.908 18:14:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.908 18:14:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.908 18:14:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.908 18:14:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.908 18:14:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.908 18:14:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6957768 kB' 'MemAvailable: 9459384 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456512 kB' 'Inactive: 2370604 kB' 'Active(anon): 128684 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119996 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181188 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100924 kB' 'KernelStack: 6800 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 
-- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.908 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.908 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 
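The trace above and below is setup/common.sh's get_meminfo helper scanning every key in /proc/meminfo (or a per-node meminfo file) until it reaches the one requested (here HugePages_Surp, then HugePages_Rsvd and HugePages_Total), echoing that key's value and returning. A minimal standalone sketch of that kind of lookup, under assumed names (get_meminfo_value is illustrative, not the repo's actual helper), could look like:

    # Sketch only: look up one key from /proc/meminfo, or from a NUMA node's
    # meminfo file when a node number is given (as the per-node pass does).
    get_meminfo_value() {
        local key=$1 node=${2-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node <n> "; strip that first,
        # then split each line on ": " into a key and its numeric value.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # Example calls matching the values echoed in this log:
    #   get_meminfo_value HugePages_Surp      -> 0
    #   get_meminfo_value HugePages_Total 0   -> 1024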
00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.909 18:14:22 -- setup/common.sh@33 -- # echo 0 00:05:24.909 18:14:22 -- setup/common.sh@33 -- # return 0 00:05:24.909 18:14:22 -- setup/hugepages.sh@99 -- # surp=0 00:05:24.909 18:14:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:24.909 18:14:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:24.909 18:14:22 -- setup/common.sh@18 -- # local node= 00:05:24.909 18:14:22 -- setup/common.sh@19 -- # local var val 00:05:24.909 18:14:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.909 18:14:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.909 18:14:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.909 18:14:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.909 18:14:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.909 18:14:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6957768 kB' 'MemAvailable: 9459384 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456312 kB' 'Inactive: 2370604 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119588 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181196 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100932 kB' 'KernelStack: 6800 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # IFS=': 
' 00:05:24.909 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.909 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 
-- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.910 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.910 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.911 18:14:22 -- setup/common.sh@33 -- # echo 0 00:05:24.911 18:14:22 -- setup/common.sh@33 -- # return 0 00:05:24.911 18:14:22 -- setup/hugepages.sh@100 -- # resv=0 00:05:24.911 nr_hugepages=1024 00:05:24.911 18:14:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:24.911 resv_hugepages=0 00:05:24.911 18:14:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:24.911 surplus_hugepages=0 00:05:24.911 18:14:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:24.911 anon_hugepages=0 00:05:24.911 18:14:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:24.911 18:14:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.911 18:14:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:24.911 18:14:22 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:05:24.911 18:14:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:24.911 18:14:22 -- setup/common.sh@18 -- # local node= 00:05:24.911 18:14:22 -- setup/common.sh@19 -- # local var val 00:05:24.911 18:14:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.911 18:14:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.911 18:14:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.911 18:14:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.911 18:14:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.911 18:14:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6957768 kB' 'MemAvailable: 9459384 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456296 kB' 'Inactive: 2370604 kB' 'Active(anon): 128468 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119572 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181196 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100932 kB' 'KernelStack: 6784 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.911 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.911 18:14:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # continue 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.912 18:14:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.912 18:14:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.912 18:14:22 -- setup/common.sh@33 -- # echo 1024 00:05:24.913 18:14:22 -- setup/common.sh@33 -- # return 0 00:05:24.913 18:14:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.913 18:14:22 -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.913 18:14:22 -- setup/hugepages.sh@27 -- # local node 00:05:24.913 18:14:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.913 18:14:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:24.913 18:14:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.913 18:14:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.913 18:14:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.913 18:14:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.913 18:14:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.913 18:14:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.913 18:14:23 -- setup/common.sh@18 -- # local node=0 00:05:24.913 18:14:23 -- setup/common.sh@19 -- # local var val 00:05:24.913 18:14:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.913 18:14:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.913 18:14:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.913 18:14:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.913 18:14:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.913 18:14:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6957768 kB' 'MemUsed: 5281356 kB' 'SwapCached: 0 kB' 'Active: 456324 kB' 'Inactive: 2370604 kB' 'Active(anon): 128496 kB' 'Inactive(anon): 0 kB' 
'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708920 kB' 'Mapped: 50844 kB' 'AnonPages: 119860 kB' 'Shmem: 10488 kB' 'KernelStack: 6800 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80264 kB' 'Slab: 181196 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 
-- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.913 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.913 18:14:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # continue 00:05:24.914 18:14:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.914 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.914 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.914 18:14:23 -- setup/common.sh@33 -- # echo 0 00:05:24.914 18:14:23 -- setup/common.sh@33 -- # return 0 00:05:24.914 18:14:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.914 18:14:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.914 18:14:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.914 18:14:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.914 node0=1024 expecting 1024 00:05:24.914 18:14:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:24.914 18:14:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:24.914 18:14:23 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:24.914 18:14:23 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:24.914 18:14:23 -- setup/hugepages.sh@202 -- # setup output 00:05:24.914 18:14:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.914 18:14:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.175 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.175 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.175 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:25.175 18:14:23 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:25.175 18:14:23 -- setup/hugepages.sh@89 -- # local node 00:05:25.175 18:14:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:25.175 18:14:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:25.175 18:14:23 -- setup/hugepages.sh@92 -- # local surp 00:05:25.175 18:14:23 -- setup/hugepages.sh@93 -- # local resv 00:05:25.175 18:14:23 -- setup/hugepages.sh@94 -- # local anon 00:05:25.175 18:14:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:25.175 18:14:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:25.175 18:14:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:25.175 18:14:23 -- setup/common.sh@18 -- # local node= 00:05:25.175 18:14:23 -- setup/common.sh@19 -- # local var val 00:05:25.175 18:14:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.175 18:14:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.175 18:14:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.175 18:14:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.175 18:14:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.175 18:14:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6952048 kB' 'MemAvailable: 9453664 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456908 kB' 'Inactive: 2370604 kB' 'Active(anon): 129080 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120208 kB' 'Mapped: 51024 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 
181172 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100908 kB' 'KernelStack: 6964 kB' 'PageTables: 4732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.176 18:14:23 -- setup/common.sh@33 -- # echo 0 00:05:25.176 18:14:23 -- setup/common.sh@33 -- # return 0 00:05:25.176 18:14:23 -- setup/hugepages.sh@97 -- # anon=0 00:05:25.176 18:14:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:25.176 18:14:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.176 18:14:23 -- setup/common.sh@18 -- # local node= 00:05:25.176 18:14:23 -- setup/common.sh@19 -- # local var val 00:05:25.176 18:14:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.176 18:14:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.176 18:14:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.176 18:14:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.176 18:14:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.176 18:14:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6952056 kB' 'MemAvailable: 9453672 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456480 kB' 'Inactive: 2370604 kB' 'Active(anon): 128652 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120016 kB' 'Mapped: 50976 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181160 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100896 kB' 'KernelStack: 6820 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.176 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 18:14:23 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.177 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.177 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 18:14:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.177 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.177 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 18:14:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.177 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.177 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 18:14:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.177 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 
00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 
18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.440 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.440 18:14:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.441 18:14:23 -- setup/common.sh@33 -- # echo 0 00:05:25.441 18:14:23 -- setup/common.sh@33 -- # return 0 00:05:25.441 18:14:23 -- setup/hugepages.sh@99 -- # surp=0 00:05:25.441 18:14:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:25.441 18:14:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:25.441 18:14:23 -- setup/common.sh@18 -- # local node= 00:05:25.441 18:14:23 -- setup/common.sh@19 -- # local var val 00:05:25.441 18:14:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.441 18:14:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.441 18:14:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.441 18:14:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.441 18:14:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.441 18:14:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6952424 kB' 'MemAvailable: 9454040 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456012 kB' 'Inactive: 2370604 kB' 'Active(anon): 128184 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119624 kB' 'Mapped: 50788 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181152 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100888 kB' 'KernelStack: 6828 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 
18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.441 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.441 18:14:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # 
continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.442 18:14:23 -- setup/common.sh@33 -- # echo 0 00:05:25.442 18:14:23 -- setup/common.sh@33 -- # return 0 00:05:25.442 18:14:23 -- setup/hugepages.sh@100 -- # resv=0 00:05:25.442 nr_hugepages=1024 00:05:25.442 18:14:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:25.442 resv_hugepages=0 00:05:25.442 18:14:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:25.442 surplus_hugepages=0 00:05:25.442 18:14:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:25.442 anon_hugepages=0 00:05:25.442 18:14:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:25.442 18:14:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.442 18:14:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:25.442 18:14:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:25.442 18:14:23 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:25.442 18:14:23 -- setup/common.sh@18 -- # local node= 00:05:25.442 18:14:23 -- setup/common.sh@19 -- # local var val 00:05:25.442 18:14:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.442 18:14:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.442 18:14:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.442 18:14:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.442 18:14:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.442 18:14:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6952424 kB' 'MemAvailable: 9454040 kB' 'Buffers: 2684 kB' 'Cached: 2706236 kB' 'SwapCached: 0 kB' 'Active: 456024 kB' 'Inactive: 2370604 kB' 'Active(anon): 128196 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119624 kB' 'Mapped: 50788 kB' 'Shmem: 10488 kB' 'KReclaimable: 80264 kB' 'Slab: 181152 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100888 kB' 'KernelStack: 6828 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.442 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.442 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- 
setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 
00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 
18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.443 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.443 18:14:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.444 18:14:23 -- setup/common.sh@33 -- # echo 1024 00:05:25.444 18:14:23 -- setup/common.sh@33 -- # return 0 00:05:25.444 18:14:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.444 18:14:23 -- setup/hugepages.sh@112 -- # get_nodes 00:05:25.444 18:14:23 -- setup/hugepages.sh@27 -- # local node 00:05:25.444 18:14:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.444 18:14:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:25.444 18:14:23 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:25.444 18:14:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.444 18:14:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.444 18:14:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.444 18:14:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:25.444 18:14:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.444 18:14:23 -- setup/common.sh@18 -- # local node=0 00:05:25.444 18:14:23 -- setup/common.sh@19 -- # local var val 00:05:25.444 18:14:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.444 18:14:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.444 18:14:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:25.444 18:14:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:25.444 18:14:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.444 18:14:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239124 kB' 'MemFree: 6953148 kB' 'MemUsed: 5285976 kB' 'SwapCached: 0 kB' 'Active: 453804 kB' 'Inactive: 2370604 kB' 'Active(anon): 125976 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370604 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708920 kB' 'Mapped: 49968 kB' 
'AnonPages: 117316 kB' 'Shmem: 10488 kB' 'KernelStack: 6748 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80264 kB' 'Slab: 181116 kB' 'SReclaimable: 80264 kB' 'SUnreclaim: 100852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- 
setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.444 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.444 18:14:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@32 -- # continue 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.445 18:14:23 -- setup/common.sh@31 -- # read -r var val _ 
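The long run of "continue" entries above is setup/common.sh's get_meminfo helper walking every key of /proc/meminfo (or of a node's own meminfo file) until it reaches the one it was asked for: HugePages_Total earlier in the test, HugePages_Surp for node0 just below. A condensed sketch of that loop, reconstructed from the common.sh@17-33 commands visible in the trace; the surrounding control flow is an assumption, not the verbatim SPDK source.

    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    # Sketch only: pieced together from the xtrace above, not copied from the repo.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # With a node argument, read that node's meminfo instead (common.sh@23-24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue      # the run of "continue" entries above
            echo "$val"                           # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
            return 0
        done
        return 1
    }

With the numbers printed in this run, get_meminfo HugePages_Total returns 1024 and get_meminfo HugePages_Surp 0 returns 0, which is exactly what the nr_hugepages accounting in hugepages.sh@110-117 checks against.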
00:05:25.445 18:14:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.445 18:14:23 -- setup/common.sh@33 -- # echo 0 00:05:25.445 18:14:23 -- setup/common.sh@33 -- # return 0 00:05:25.445 18:14:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.445 18:14:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.445 18:14:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.445 18:14:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.445 node0=1024 expecting 1024 00:05:25.445 18:14:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:25.445 18:14:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:25.445 00:05:25.445 real 0m1.058s 00:05:25.445 user 0m0.495s 00:05:25.445 sys 0m0.602s 00:05:25.445 18:14:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.445 18:14:23 -- common/autotest_common.sh@10 -- # set +x 00:05:25.445 ************************************ 00:05:25.445 END TEST no_shrink_alloc 00:05:25.445 ************************************ 00:05:25.445 18:14:23 -- setup/hugepages.sh@217 -- # clear_hp 00:05:25.445 18:14:23 -- setup/hugepages.sh@37 -- # local node hp 00:05:25.445 18:14:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:25.445 18:14:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:25.445 18:14:23 -- setup/hugepages.sh@41 -- # echo 0 00:05:25.445 18:14:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:25.445 18:14:23 -- setup/hugepages.sh@41 -- # echo 0 00:05:25.445 18:14:23 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:25.445 18:14:23 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:25.445 00:05:25.445 real 0m4.667s 00:05:25.445 user 0m2.217s 00:05:25.445 sys 0m2.543s 00:05:25.445 18:14:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.445 ************************************ 00:05:25.445 END TEST hugepages 00:05:25.445 ************************************ 00:05:25.445 18:14:23 -- common/autotest_common.sh@10 -- # set +x 00:05:25.445 18:14:23 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:25.445 18:14:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.445 18:14:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.445 18:14:23 -- common/autotest_common.sh@10 -- # set +x 00:05:25.445 ************************************ 00:05:25.445 START TEST driver 00:05:25.445 ************************************ 00:05:25.445 18:14:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:25.705 * Looking for test storage... 
00:05:25.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:25.705 18:14:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:25.705 18:14:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:25.705 18:14:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:25.705 18:14:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:25.705 18:14:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:25.705 18:14:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:25.705 18:14:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:25.705 18:14:23 -- scripts/common.sh@335 -- # IFS=.-: 00:05:25.705 18:14:23 -- scripts/common.sh@335 -- # read -ra ver1 00:05:25.705 18:14:23 -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.705 18:14:23 -- scripts/common.sh@336 -- # read -ra ver2 00:05:25.705 18:14:23 -- scripts/common.sh@337 -- # local 'op=<' 00:05:25.705 18:14:23 -- scripts/common.sh@339 -- # ver1_l=2 00:05:25.705 18:14:23 -- scripts/common.sh@340 -- # ver2_l=1 00:05:25.706 18:14:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:25.706 18:14:23 -- scripts/common.sh@343 -- # case "$op" in 00:05:25.706 18:14:23 -- scripts/common.sh@344 -- # : 1 00:05:25.706 18:14:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:25.706 18:14:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.706 18:14:23 -- scripts/common.sh@364 -- # decimal 1 00:05:25.706 18:14:23 -- scripts/common.sh@352 -- # local d=1 00:05:25.706 18:14:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.706 18:14:23 -- scripts/common.sh@354 -- # echo 1 00:05:25.706 18:14:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:25.706 18:14:23 -- scripts/common.sh@365 -- # decimal 2 00:05:25.706 18:14:23 -- scripts/common.sh@352 -- # local d=2 00:05:25.706 18:14:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.706 18:14:23 -- scripts/common.sh@354 -- # echo 2 00:05:25.706 18:14:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:25.706 18:14:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:25.706 18:14:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:25.706 18:14:23 -- scripts/common.sh@367 -- # return 0 00:05:25.706 18:14:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.706 18:14:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:25.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.706 --rc genhtml_branch_coverage=1 00:05:25.706 --rc genhtml_function_coverage=1 00:05:25.706 --rc genhtml_legend=1 00:05:25.706 --rc geninfo_all_blocks=1 00:05:25.706 --rc geninfo_unexecuted_blocks=1 00:05:25.706 00:05:25.706 ' 00:05:25.706 18:14:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:25.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.706 --rc genhtml_branch_coverage=1 00:05:25.706 --rc genhtml_function_coverage=1 00:05:25.706 --rc genhtml_legend=1 00:05:25.706 --rc geninfo_all_blocks=1 00:05:25.706 --rc geninfo_unexecuted_blocks=1 00:05:25.706 00:05:25.706 ' 00:05:25.706 18:14:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:25.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.706 --rc genhtml_branch_coverage=1 00:05:25.706 --rc genhtml_function_coverage=1 00:05:25.706 --rc genhtml_legend=1 00:05:25.706 --rc geninfo_all_blocks=1 00:05:25.706 --rc geninfo_unexecuted_blocks=1 00:05:25.706 00:05:25.706 ' 00:05:25.706 18:14:23 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:25.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.706 --rc genhtml_branch_coverage=1 00:05:25.706 --rc genhtml_function_coverage=1 00:05:25.706 --rc genhtml_legend=1 00:05:25.706 --rc geninfo_all_blocks=1 00:05:25.706 --rc geninfo_unexecuted_blocks=1 00:05:25.706 00:05:25.706 ' 00:05:25.706 18:14:23 -- setup/driver.sh@68 -- # setup reset 00:05:25.706 18:14:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.706 18:14:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.275 18:14:24 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:26.275 18:14:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.275 18:14:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.275 18:14:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.275 ************************************ 00:05:26.275 START TEST guess_driver 00:05:26.275 ************************************ 00:05:26.275 18:14:24 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:26.275 18:14:24 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:26.275 18:14:24 -- setup/driver.sh@47 -- # local fail=0 00:05:26.275 18:14:24 -- setup/driver.sh@49 -- # pick_driver 00:05:26.275 18:14:24 -- setup/driver.sh@36 -- # vfio 00:05:26.275 18:14:24 -- setup/driver.sh@21 -- # local iommu_grups 00:05:26.275 18:14:24 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:26.275 18:14:24 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:26.275 18:14:24 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:26.275 18:14:24 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:26.275 18:14:24 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:26.275 18:14:24 -- setup/driver.sh@32 -- # return 1 00:05:26.275 18:14:24 -- setup/driver.sh@38 -- # uio 00:05:26.275 18:14:24 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:26.275 18:14:24 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:26.275 18:14:24 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:26.275 18:14:24 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:26.275 18:14:24 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:26.275 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:26.275 18:14:24 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:26.275 18:14:24 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:26.275 18:14:24 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:26.275 Looking for driver=uio_pci_generic 00:05:26.275 18:14:24 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:26.275 18:14:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:26.275 18:14:24 -- setup/driver.sh@45 -- # setup output config 00:05:26.275 18:14:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.275 18:14:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:26.843 18:14:25 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:26.843 18:14:25 -- setup/driver.sh@58 -- # continue 00:05:26.843 18:14:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:27.102 18:14:25 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:27.102 18:14:25 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:27.102 18:14:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:27.102 18:14:25 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:27.102 18:14:25 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:27.102 18:14:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:27.102 18:14:25 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:27.102 18:14:25 -- setup/driver.sh@65 -- # setup reset 00:05:27.102 18:14:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:27.102 18:14:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.671 00:05:27.671 real 0m1.405s 00:05:27.671 user 0m0.536s 00:05:27.671 sys 0m0.871s 00:05:27.671 18:14:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.671 18:14:25 -- common/autotest_common.sh@10 -- # set +x 00:05:27.671 ************************************ 00:05:27.671 END TEST guess_driver 00:05:27.671 ************************************ 00:05:27.671 00:05:27.671 real 0m2.162s 00:05:27.671 user 0m0.852s 00:05:27.671 sys 0m1.382s 00:05:27.671 18:14:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.671 18:14:25 -- common/autotest_common.sh@10 -- # set +x 00:05:27.671 ************************************ 00:05:27.671 END TEST driver 00:05:27.671 ************************************ 00:05:27.671 18:14:25 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:27.671 18:14:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.671 18:14:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.671 18:14:25 -- common/autotest_common.sh@10 -- # set +x 00:05:27.671 ************************************ 00:05:27.671 START TEST devices 00:05:27.671 ************************************ 00:05:27.671 18:14:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:27.671 * Looking for test storage... 00:05:27.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:27.931 18:14:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.931 18:14:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:27.931 18:14:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.931 18:14:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:27.931 18:14:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:27.931 18:14:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:27.931 18:14:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:27.931 18:14:26 -- scripts/common.sh@335 -- # IFS=.-: 00:05:27.931 18:14:26 -- scripts/common.sh@335 -- # read -ra ver1 00:05:27.931 18:14:26 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.931 18:14:26 -- scripts/common.sh@336 -- # read -ra ver2 00:05:27.931 18:14:26 -- scripts/common.sh@337 -- # local 'op=<' 00:05:27.931 18:14:26 -- scripts/common.sh@339 -- # ver1_l=2 00:05:27.931 18:14:26 -- scripts/common.sh@340 -- # ver2_l=1 00:05:27.931 18:14:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:27.931 18:14:26 -- scripts/common.sh@343 -- # case "$op" in 00:05:27.931 18:14:26 -- scripts/common.sh@344 -- # : 1 00:05:27.931 18:14:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:27.931 18:14:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.931 18:14:26 -- scripts/common.sh@364 -- # decimal 1 00:05:27.931 18:14:26 -- scripts/common.sh@352 -- # local d=1 00:05:27.931 18:14:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.931 18:14:26 -- scripts/common.sh@354 -- # echo 1 00:05:27.931 18:14:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:27.932 18:14:26 -- scripts/common.sh@365 -- # decimal 2 00:05:27.932 18:14:26 -- scripts/common.sh@352 -- # local d=2 00:05:27.932 18:14:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.932 18:14:26 -- scripts/common.sh@354 -- # echo 2 00:05:27.932 18:14:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:27.932 18:14:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:27.932 18:14:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:27.932 18:14:26 -- scripts/common.sh@367 -- # return 0 00:05:27.932 18:14:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.932 18:14:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:27.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.932 --rc genhtml_branch_coverage=1 00:05:27.932 --rc genhtml_function_coverage=1 00:05:27.932 --rc genhtml_legend=1 00:05:27.932 --rc geninfo_all_blocks=1 00:05:27.932 --rc geninfo_unexecuted_blocks=1 00:05:27.932 00:05:27.932 ' 00:05:27.932 18:14:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:27.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.932 --rc genhtml_branch_coverage=1 00:05:27.932 --rc genhtml_function_coverage=1 00:05:27.932 --rc genhtml_legend=1 00:05:27.932 --rc geninfo_all_blocks=1 00:05:27.932 --rc geninfo_unexecuted_blocks=1 00:05:27.932 00:05:27.932 ' 00:05:27.932 18:14:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:27.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.932 --rc genhtml_branch_coverage=1 00:05:27.932 --rc genhtml_function_coverage=1 00:05:27.932 --rc genhtml_legend=1 00:05:27.932 --rc geninfo_all_blocks=1 00:05:27.932 --rc geninfo_unexecuted_blocks=1 00:05:27.932 00:05:27.932 ' 00:05:27.932 18:14:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:27.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.932 --rc genhtml_branch_coverage=1 00:05:27.932 --rc genhtml_function_coverage=1 00:05:27.932 --rc genhtml_legend=1 00:05:27.932 --rc geninfo_all_blocks=1 00:05:27.932 --rc geninfo_unexecuted_blocks=1 00:05:27.932 00:05:27.932 ' 00:05:27.932 18:14:26 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:27.932 18:14:26 -- setup/devices.sh@192 -- # setup reset 00:05:27.932 18:14:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:27.932 18:14:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.869 18:14:26 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:28.870 18:14:26 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:28.870 18:14:26 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:28.870 18:14:26 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:28.870 18:14:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.870 18:14:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:28.870 18:14:26 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:28.870 18:14:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:28.870 18:14:26 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:28.870 18:14:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.870 18:14:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:28.870 18:14:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:28.870 18:14:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:28.870 18:14:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:28.870 18:14:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.870 18:14:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:28.870 18:14:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:28.870 18:14:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:28.870 18:14:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:28.870 18:14:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:28.870 18:14:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:28.870 18:14:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:28.870 18:14:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:28.870 18:14:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:28.870 18:14:26 -- setup/devices.sh@196 -- # blocks=() 00:05:28.870 18:14:26 -- setup/devices.sh@196 -- # declare -a blocks 00:05:28.870 18:14:26 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:28.870 18:14:26 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:28.870 18:14:26 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:28.870 18:14:26 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:28.870 18:14:26 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:28.870 18:14:26 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:28.870 18:14:26 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:28.870 18:14:26 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:28.870 18:14:26 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:28.870 18:14:26 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:28.870 18:14:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:28.870 No valid GPT data, bailing 00:05:28.870 18:14:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:28.870 18:14:26 -- scripts/common.sh@393 -- # pt= 00:05:28.870 18:14:26 -- scripts/common.sh@394 -- # return 1 00:05:28.870 18:14:26 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:28.870 18:14:26 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:28.870 18:14:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:28.870 18:14:26 -- setup/common.sh@80 -- # echo 5368709120 00:05:28.870 18:14:26 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:28.870 18:14:26 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:28.870 18:14:26 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:28.870 18:14:26 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:28.870 18:14:26 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:28.870 18:14:26 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:28.870 18:14:26 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:28.870 18:14:26 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:28.870 18:14:26 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
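Just above, get_zoned_devs walked /sys/block/nvme* and is_block_zoned read each namespace's queue/zoned attribute; every device in this run reports none, so nothing gets excluded before the partition-table and size checks. A minimal sketch of that filter, assuming only what the trace shows; the PCI-address bookkeeping of the real helper is omitted.

    # Sketch of the zoned-namespace filter traced above (is_block_zoned /
    # get_zoned_devs in autotest_common.sh); illustration only.
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        # Conventional namespaces report "none"; anything else counts as zoned.
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    get_zoned_devs() {
        local -gA zoned_devs=()
        local nvme
        for nvme in /sys/block/nvme*; do
            # In this run all four namespaces return "none", so the map stays empty.
            is_block_zoned "${nvme##*/}" && zoned_devs["${nvme##*/}"]=1
        done
    }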
00:05:28.870 18:14:26 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:28.870 18:14:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:28.870 No valid GPT data, bailing 00:05:28.870 18:14:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:28.870 18:14:26 -- scripts/common.sh@393 -- # pt= 00:05:28.870 18:14:26 -- scripts/common.sh@394 -- # return 1 00:05:28.870 18:14:26 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:28.870 18:14:26 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:28.870 18:14:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:28.870 18:14:26 -- setup/common.sh@80 -- # echo 4294967296 00:05:28.870 18:14:26 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:28.870 18:14:26 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:28.870 18:14:26 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:28.870 18:14:26 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:28.870 18:14:26 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:28.870 18:14:26 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:28.870 18:14:26 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:28.870 18:14:26 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:28.870 18:14:26 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:28.870 18:14:26 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:28.870 18:14:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:28.870 No valid GPT data, bailing 00:05:28.870 18:14:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:28.870 18:14:27 -- scripts/common.sh@393 -- # pt= 00:05:28.870 18:14:27 -- scripts/common.sh@394 -- # return 1 00:05:28.870 18:14:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:28.870 18:14:27 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:28.870 18:14:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:28.870 18:14:27 -- setup/common.sh@80 -- # echo 4294967296 00:05:28.870 18:14:27 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:28.870 18:14:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:28.870 18:14:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:28.870 18:14:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:28.870 18:14:27 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:28.870 18:14:27 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:28.870 18:14:27 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:28.870 18:14:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:28.870 18:14:27 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:28.870 18:14:27 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:28.870 18:14:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:28.870 No valid GPT data, bailing 00:05:28.870 18:14:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:28.870 18:14:27 -- scripts/common.sh@393 -- # pt= 00:05:28.870 18:14:27 -- scripts/common.sh@394 -- # return 1 00:05:28.870 18:14:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:28.870 18:14:27 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:28.870 18:14:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:28.870 18:14:27 -- setup/common.sh@80 -- # echo 4294967296 
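Each candidate namespace then has to pass two checks before devices.sh will use it: block_in_use must find no partition table (spdk-gpt.py and blkid both come back empty, hence the repeated "No valid GPT data, bailing"), and the device must be at least min_disk_size = 3221225472 bytes (3 GiB). A simplified sketch of that screening; the spdk-gpt.py probe is left out, the glob is simplified from the nvme!(*c*) pattern in the trace, and disk_size_bytes is a hypothetical stand-in because the trace only shows the resulting byte totals.

    min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as set in setup/devices.sh@198

    # Simplified: the real block_in_use asks scripts/spdk-gpt.py first and only
    # then falls back to blkid; here only the blkid probe is sketched.
    block_in_use() {
        local block=$1 pt
        pt=$(blkid -s PTTYPE -o value "/dev/$block")
        [[ -n $pt ]]                             # empty output == no partition table == free to use
    }

    disk_size_bytes() {                          # hypothetical helper; assumes sector count * 512
        local dev=$1
        [[ -e /sys/block/$dev ]] || return 1
        echo $(( $(< "/sys/block/$dev/size") * 512 ))
    }

    blocks=()
    for block in /sys/block/nvme*n*; do
        name=${block##*/}
        if ! block_in_use "$name" && (( $(disk_size_bytes "$name") >= min_disk_size )); then
            blocks+=("$name")   # nvme0n1 (5368709120 B) and nvme1n1-n3 (4294967296 B) all qualify
        fi
    done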
00:05:28.870 18:14:27 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:28.870 18:14:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:28.870 18:14:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:28.870 18:14:27 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:28.870 18:14:27 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:28.870 18:14:27 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:28.870 18:14:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.870 18:14:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.870 18:14:27 -- common/autotest_common.sh@10 -- # set +x 00:05:28.870 ************************************ 00:05:28.870 START TEST nvme_mount 00:05:28.870 ************************************ 00:05:28.870 18:14:27 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:28.870 18:14:27 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:28.870 18:14:27 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:28.870 18:14:27 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.870 18:14:27 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.870 18:14:27 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:28.870 18:14:27 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:28.870 18:14:27 -- setup/common.sh@40 -- # local part_no=1 00:05:28.870 18:14:27 -- setup/common.sh@41 -- # local size=1073741824 00:05:28.870 18:14:27 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:28.870 18:14:27 -- setup/common.sh@44 -- # parts=() 00:05:28.870 18:14:27 -- setup/common.sh@44 -- # local parts 00:05:28.870 18:14:27 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:28.870 18:14:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.870 18:14:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:28.870 18:14:27 -- setup/common.sh@46 -- # (( part++ )) 00:05:28.870 18:14:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.870 18:14:27 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:28.870 18:14:27 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:28.870 18:14:27 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:30.247 Creating new GPT entries in memory. 00:05:30.247 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:30.247 other utilities. 00:05:30.247 18:14:28 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:30.247 18:14:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.247 18:14:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:30.247 18:14:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.247 18:14:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:31.192 Creating new GPT entries in memory. 00:05:31.192 The operation has completed successfully. 
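The nvme_mount test that starts here first carves a 1 GiB partition out of nvme0n1 with partition_drive: launch sync_dev_uevents.sh in the background so the test can wait for the kernel's partition uevent, wipe the GPT, then create partition 1 at units 2048-264191 under flock. A sketch of that sequence, reconstructed from the common.sh@39-62 trace; treat it as illustration, not the verbatim helper.

    partition_drive() {                      # sketch; the real helper lives in test/setup/common.sh
        local disk=$1 part_no=${2:-1}
        local size=$((1024 * 1024 * 1024))   # 1073741824 bytes per partition
        local part part_start=0 part_end=0
        local parts=()

        for ((part = 1; part <= part_no; part++)); do
            parts+=("${disk}p$part")
        done
        (( size /= 4096 ))                   # 1073741824 -> 262144, the unit count sgdisk is given

        # Listen for the "partition added" uevents while sgdisk does its work.
        /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition "${parts[@]}" &

        sgdisk "/dev/$disk" --zap-all        # "GPT data structures destroyed!" in the log above

        for ((part = 1; part <= part_no; part++)); do
            (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
            (( part_end = part_start + size - 1 ))
            # First pass in this run: --new=1:2048:264191
            flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
        done

        wait $!                              # "wait 63854": the uevent listener has seen every partition
    }

The dm_mount test further down reuses the same helper with two partitions, which is where the second sgdisk call, --new=2:264192:526335, comes from.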
00:05:31.192 18:14:29 -- setup/common.sh@57 -- # (( part++ )) 00:05:31.192 18:14:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.192 18:14:29 -- setup/common.sh@62 -- # wait 63854 00:05:31.192 18:14:29 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.192 18:14:29 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:31.192 18:14:29 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.192 18:14:29 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:31.192 18:14:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:31.192 18:14:29 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.192 18:14:29 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:31.192 18:14:29 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:31.192 18:14:29 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:31.192 18:14:29 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.192 18:14:29 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:31.192 18:14:29 -- setup/devices.sh@53 -- # local found=0 00:05:31.192 18:14:29 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:31.192 18:14:29 -- setup/devices.sh@56 -- # : 00:05:31.192 18:14:29 -- setup/devices.sh@59 -- # local pci status 00:05:31.192 18:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.192 18:14:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:31.192 18:14:29 -- setup/devices.sh@47 -- # setup output config 00:05:31.192 18:14:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.192 18:14:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.192 18:14:29 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.192 18:14:29 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:31.192 18:14:29 -- setup/devices.sh@63 -- # found=1 00:05:31.192 18:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.192 18:14:29 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.192 18:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.452 18:14:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.452 18:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.711 18:14:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.711 18:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.711 18:14:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.711 18:14:29 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:31.711 18:14:29 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.711 18:14:29 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:31.711 18:14:29 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:31.711 18:14:29 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:31.711 18:14:29 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.711 18:14:29 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.711 18:14:29 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.711 18:14:29 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:31.711 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:31.711 18:14:29 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.711 18:14:29 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:31.970 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:31.970 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:31.970 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:31.970 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:31.970 18:14:30 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:31.970 18:14:30 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:31.970 18:14:30 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.970 18:14:30 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:31.970 18:14:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:31.970 18:14:30 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.970 18:14:30 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:31.970 18:14:30 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:31.971 18:14:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:31.971 18:14:30 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.971 18:14:30 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:31.971 18:14:30 -- setup/devices.sh@53 -- # local found=0 00:05:31.971 18:14:30 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:31.971 18:14:30 -- setup/devices.sh@56 -- # : 00:05:31.971 18:14:30 -- setup/devices.sh@59 -- # local pci status 00:05:31.971 18:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.971 18:14:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:31.971 18:14:30 -- setup/devices.sh@47 -- # setup output config 00:05:31.971 18:14:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.971 18:14:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:32.230 18:14:30 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.230 18:14:30 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:32.230 18:14:30 -- setup/devices.sh@63 -- # found=1 00:05:32.230 18:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.230 18:14:30 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.230 
18:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.489 18:14:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.489 18:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.489 18:14:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.489 18:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.748 18:14:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.748 18:14:30 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:32.748 18:14:30 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.748 18:14:30 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:32.748 18:14:30 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:32.748 18:14:30 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.748 18:14:30 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:32.748 18:14:30 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:32.748 18:14:30 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:32.748 18:14:30 -- setup/devices.sh@50 -- # local mount_point= 00:05:32.748 18:14:30 -- setup/devices.sh@51 -- # local test_file= 00:05:32.748 18:14:30 -- setup/devices.sh@53 -- # local found=0 00:05:32.748 18:14:30 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:32.748 18:14:30 -- setup/devices.sh@59 -- # local pci status 00:05:32.748 18:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.748 18:14:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:32.748 18:14:30 -- setup/devices.sh@47 -- # setup output config 00:05:32.748 18:14:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.748 18:14:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.007 18:14:31 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.007 18:14:31 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:33.007 18:14:31 -- setup/devices.sh@63 -- # found=1 00:05:33.007 18:14:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.007 18:14:31 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.007 18:14:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.267 18:14:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.267 18:14:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.267 18:14:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.267 18:14:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.267 18:14:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.267 18:14:31 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:33.267 18:14:31 -- setup/devices.sh@68 -- # return 0 00:05:33.267 18:14:31 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:33.267 18:14:31 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.267 18:14:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.267 18:14:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.267 18:14:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:33.267 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:33.267 00:05:33.267 real 0m4.427s 00:05:33.267 user 0m1.032s 00:05:33.267 sys 0m1.085s 00:05:33.267 18:14:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.267 18:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.267 ************************************ 00:05:33.267 END TEST nvme_mount 00:05:33.267 ************************************ 00:05:33.527 18:14:31 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:33.527 18:14:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.527 18:14:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.527 18:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.527 ************************************ 00:05:33.527 START TEST dm_mount 00:05:33.527 ************************************ 00:05:33.527 18:14:31 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:33.527 18:14:31 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:33.527 18:14:31 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:33.527 18:14:31 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:33.527 18:14:31 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:33.527 18:14:31 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:33.527 18:14:31 -- setup/common.sh@40 -- # local part_no=2 00:05:33.527 18:14:31 -- setup/common.sh@41 -- # local size=1073741824 00:05:33.527 18:14:31 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:33.527 18:14:31 -- setup/common.sh@44 -- # parts=() 00:05:33.527 18:14:31 -- setup/common.sh@44 -- # local parts 00:05:33.527 18:14:31 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:33.527 18:14:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.527 18:14:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:33.527 18:14:31 -- setup/common.sh@46 -- # (( part++ )) 00:05:33.527 18:14:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.527 18:14:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:33.527 18:14:31 -- setup/common.sh@46 -- # (( part++ )) 00:05:33.527 18:14:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.527 18:14:31 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:33.527 18:14:31 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:33.527 18:14:31 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:34.464 Creating new GPT entries in memory. 00:05:34.464 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:34.464 other utilities. 00:05:34.464 18:14:32 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:34.464 18:14:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:34.464 18:14:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:34.464 18:14:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:34.464 18:14:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:35.401 Creating new GPT entries in memory. 00:05:35.401 The operation has completed successfully. 00:05:35.401 18:14:33 -- setup/common.sh@57 -- # (( part++ )) 00:05:35.401 18:14:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.401 18:14:33 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:35.401 18:14:33 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:35.401 18:14:33 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:36.793 The operation has completed successfully. 00:05:36.793 18:14:34 -- setup/common.sh@57 -- # (( part++ )) 00:05:36.793 18:14:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.793 18:14:34 -- setup/common.sh@62 -- # wait 64308 00:05:36.793 18:14:34 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:36.793 18:14:34 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:36.793 18:14:34 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:36.793 18:14:34 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:36.793 18:14:34 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:36.793 18:14:34 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:36.793 18:14:34 -- setup/devices.sh@161 -- # break 00:05:36.793 18:14:34 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:36.793 18:14:34 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:36.793 18:14:34 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:36.793 18:14:34 -- setup/devices.sh@166 -- # dm=dm-0 00:05:36.793 18:14:34 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:36.793 18:14:34 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:36.793 18:14:34 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:36.794 18:14:34 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:36.794 18:14:34 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:36.794 18:14:34 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:36.794 18:14:34 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:36.794 18:14:34 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:36.794 18:14:34 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:36.794 18:14:34 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:36.794 18:14:34 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:36.794 18:14:34 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:36.794 18:14:34 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:36.794 18:14:34 -- setup/devices.sh@53 -- # local found=0 00:05:36.794 18:14:34 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:36.794 18:14:34 -- setup/devices.sh@56 -- # : 00:05:36.794 18:14:34 -- setup/devices.sh@59 -- # local pci status 00:05:36.794 18:14:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.794 18:14:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:36.794 18:14:34 -- setup/devices.sh@47 -- # setup output config 00:05:36.794 18:14:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.794 18:14:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:36.794 18:14:34 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.794 18:14:34 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:36.794 18:14:34 -- setup/devices.sh@63 -- # found=1 00:05:36.794 18:14:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.794 18:14:34 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.794 18:14:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.053 18:14:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.053 18:14:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.053 18:14:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.053 18:14:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.313 18:14:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.313 18:14:35 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:37.313 18:14:35 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:37.313 18:14:35 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:37.313 18:14:35 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:37.313 18:14:35 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:37.313 18:14:35 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:37.313 18:14:35 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:37.313 18:14:35 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:37.313 18:14:35 -- setup/devices.sh@50 -- # local mount_point= 00:05:37.313 18:14:35 -- setup/devices.sh@51 -- # local test_file= 00:05:37.313 18:14:35 -- setup/devices.sh@53 -- # local found=0 00:05:37.313 18:14:35 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:37.313 18:14:35 -- setup/devices.sh@59 -- # local pci status 00:05:37.313 18:14:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.313 18:14:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:37.313 18:14:35 -- setup/devices.sh@47 -- # setup output config 00:05:37.313 18:14:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.313 18:14:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:37.313 18:14:35 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.313 18:14:35 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:37.313 18:14:35 -- setup/devices.sh@63 -- # found=1 00:05:37.313 18:14:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.313 18:14:35 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.313 18:14:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 18:14:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.882 18:14:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 18:14:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.882 18:14:35 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 18:14:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.882 18:14:36 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:37.882 18:14:36 -- setup/devices.sh@68 -- # return 0 00:05:37.882 18:14:36 -- setup/devices.sh@187 -- # cleanup_dm 00:05:37.882 18:14:36 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:37.882 18:14:36 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:37.882 18:14:36 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:37.882 18:14:36 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.882 18:14:36 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:37.882 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:37.882 18:14:36 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:37.882 18:14:36 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:37.882 00:05:37.882 real 0m4.535s 00:05:37.882 user 0m0.691s 00:05:37.882 sys 0m0.782s 00:05:37.882 18:14:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.882 18:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:37.882 ************************************ 00:05:37.882 END TEST dm_mount 00:05:37.882 ************************************ 00:05:37.882 18:14:36 -- setup/devices.sh@1 -- # cleanup 00:05:37.882 18:14:36 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:37.882 18:14:36 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.882 18:14:36 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.882 18:14:36 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:38.141 18:14:36 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:38.141 18:14:36 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:38.400 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:38.400 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:38.400 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:38.400 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:38.400 18:14:36 -- setup/devices.sh@12 -- # cleanup_dm 00:05:38.400 18:14:36 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:38.400 18:14:36 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:38.400 18:14:36 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:38.400 18:14:36 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:38.400 18:14:36 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:38.400 18:14:36 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:38.400 00:05:38.400 real 0m10.573s 00:05:38.400 user 0m2.453s 00:05:38.400 sys 0m2.471s 00:05:38.400 18:14:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.400 18:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:38.400 ************************************ 00:05:38.400 END TEST devices 00:05:38.400 ************************************ 00:05:38.400 00:05:38.400 real 0m22.084s 00:05:38.400 user 0m7.665s 00:05:38.400 sys 0m8.913s 00:05:38.400 18:14:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.400 18:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:38.400 ************************************ 00:05:38.400 END TEST setup.sh 00:05:38.400 ************************************ 00:05:38.400 18:14:36 -- 
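The dm_mount teardown traced above unwinds the stack in reverse order of creation. A minimal sketch of the same teardown, assuming the device and mapper names used in this run (nvme0n1, nvme_dm_test) and the mount point under the SPDK test tree:

    umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount   # only if still mounted
    dmsetup remove --force nvme_dm_test                       # drop the device-mapper target first
    wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2                # clear the ext4 signatures on the partitions
    wipefs --all /dev/nvme0n1                                 # clear GPT ('45 46 49 20 50 41 52 54' = "EFI PART") and the 55 aa protective MBR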
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:38.400 Hugepages 00:05:38.400 node hugesize free / total 00:05:38.400 node0 1048576kB 0 / 0 00:05:38.400 node0 2048kB 2048 / 2048 00:05:38.400 00:05:38.400 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:38.660 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:38.660 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:38.660 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:38.660 18:14:36 -- spdk/autotest.sh@128 -- # uname -s 00:05:38.660 18:14:36 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:38.660 18:14:36 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:38.660 18:14:36 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:39.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.628 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.628 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.628 18:14:37 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:40.564 18:14:38 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:40.564 18:14:38 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:40.564 18:14:38 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:40.564 18:14:38 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:40.564 18:14:38 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:40.564 18:14:38 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:40.564 18:14:38 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.564 18:14:38 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:40.564 18:14:38 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:40.564 18:14:38 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:40.564 18:14:38 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:40.564 18:14:38 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.131 Waiting for block devices as requested 00:05:41.131 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:41.131 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:41.131 18:14:39 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:41.131 18:14:39 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:41.131 18:14:39 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:41.131 18:14:39 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:41.131 18:14:39 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:41.131 18:14:39 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:41.131 18:14:39 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:41.131 18:14:39 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:41.131 18:14:39 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:41.131 18:14:39 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:41.131 18:14:39 -- 
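get_nvme_ctrlr_from_bdf, traced at the end of the block above, maps a PCI address to its /dev/nvmeX controller through sysfs. A standalone sketch of the same lookup, assuming the usual /sys/class/nvme layout:

    bdf=0000:00:06.0
    for link in /sys/class/nvme/nvme*; do
        if [[ $(readlink -f "$link") == *"/$bdf/nvme/"* ]]; then
            echo "/dev/$(basename "$link")"   # resolves to /dev/nvme0 for 0000:00:06.0 in this run
        fi
    done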
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:41.131 18:14:39 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:41.131 18:14:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:41.390 18:14:39 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:41.390 18:14:39 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:41.390 18:14:39 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:41.390 18:14:39 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:41.390 18:14:39 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:41.390 18:14:39 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:41.390 18:14:39 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:41.390 18:14:39 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:41.390 18:14:39 -- common/autotest_common.sh@1552 -- # continue 00:05:41.390 18:14:39 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:41.390 18:14:39 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:41.390 18:14:39 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:41.390 18:14:39 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:41.390 18:14:39 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:41.390 18:14:39 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:41.390 18:14:39 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:41.390 18:14:39 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:41.390 18:14:39 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:41.390 18:14:39 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:41.390 18:14:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:41.390 18:14:39 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:41.390 18:14:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:41.390 18:14:39 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:41.390 18:14:39 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:41.390 18:14:39 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:41.390 18:14:39 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:41.390 18:14:39 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:41.390 18:14:39 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:41.390 18:14:39 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:41.390 18:14:39 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:41.390 18:14:39 -- common/autotest_common.sh@1552 -- # continue 00:05:41.390 18:14:39 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:41.390 18:14:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.390 18:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:41.390 18:14:39 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:41.390 18:14:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.390 18:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:41.390 18:14:39 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:41.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.957 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.216 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:42.216 18:14:40 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:42.216 18:14:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.216 18:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:42.216 18:14:40 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:42.216 18:14:40 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:42.216 18:14:40 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:42.216 18:14:40 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:42.216 18:14:40 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:42.217 18:14:40 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:42.217 18:14:40 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:42.217 18:14:40 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:42.217 18:14:40 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:42.217 18:14:40 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:42.217 18:14:40 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:42.217 18:14:40 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:42.217 18:14:40 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:42.217 18:14:40 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:42.217 18:14:40 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:42.217 18:14:40 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:42.217 18:14:40 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.217 18:14:40 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:42.217 18:14:40 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:42.217 18:14:40 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:42.217 18:14:40 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.217 18:14:40 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:42.217 18:14:40 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:42.217 18:14:40 -- common/autotest_common.sh@1588 -- # return 0 00:05:42.217 18:14:40 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:42.217 18:14:40 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:42.217 18:14:40 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:42.217 18:14:40 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:42.217 18:14:40 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:42.217 18:14:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.217 18:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:42.217 18:14:40 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:42.217 18:14:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.217 18:14:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.217 18:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:42.217 ************************************ 00:05:42.217 START TEST env 00:05:42.217 ************************************ 00:05:42.217 18:14:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:42.476 * Looking for test storage... 
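opal_revert_cleanup, traced above, only touches disks whose PCI device ID is 0x0a54; both QEMU NVMe devices here report 0x0010, so it returns without doing anything. A minimal sketch of that filter, assuming gen_nvme.sh emits the JSON config shown in the trace:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && echo "$bdf"   # only these addresses would get the OPAL revert
    done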
00:05:42.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:42.476 18:14:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:42.476 18:14:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:42.476 18:14:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:42.476 18:14:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:42.476 18:14:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:42.476 18:14:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:42.476 18:14:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:42.476 18:14:40 -- scripts/common.sh@335 -- # IFS=.-: 00:05:42.476 18:14:40 -- scripts/common.sh@335 -- # read -ra ver1 00:05:42.476 18:14:40 -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.476 18:14:40 -- scripts/common.sh@336 -- # read -ra ver2 00:05:42.476 18:14:40 -- scripts/common.sh@337 -- # local 'op=<' 00:05:42.476 18:14:40 -- scripts/common.sh@339 -- # ver1_l=2 00:05:42.476 18:14:40 -- scripts/common.sh@340 -- # ver2_l=1 00:05:42.476 18:14:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:42.476 18:14:40 -- scripts/common.sh@343 -- # case "$op" in 00:05:42.476 18:14:40 -- scripts/common.sh@344 -- # : 1 00:05:42.476 18:14:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:42.476 18:14:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.476 18:14:40 -- scripts/common.sh@364 -- # decimal 1 00:05:42.476 18:14:40 -- scripts/common.sh@352 -- # local d=1 00:05:42.476 18:14:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.476 18:14:40 -- scripts/common.sh@354 -- # echo 1 00:05:42.476 18:14:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:42.476 18:14:40 -- scripts/common.sh@365 -- # decimal 2 00:05:42.476 18:14:40 -- scripts/common.sh@352 -- # local d=2 00:05:42.476 18:14:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.476 18:14:40 -- scripts/common.sh@354 -- # echo 2 00:05:42.476 18:14:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:42.476 18:14:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:42.476 18:14:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:42.476 18:14:40 -- scripts/common.sh@367 -- # return 0 00:05:42.476 18:14:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.476 18:14:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:42.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.476 --rc genhtml_branch_coverage=1 00:05:42.476 --rc genhtml_function_coverage=1 00:05:42.476 --rc genhtml_legend=1 00:05:42.476 --rc geninfo_all_blocks=1 00:05:42.476 --rc geninfo_unexecuted_blocks=1 00:05:42.476 00:05:42.476 ' 00:05:42.476 18:14:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:42.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.476 --rc genhtml_branch_coverage=1 00:05:42.476 --rc genhtml_function_coverage=1 00:05:42.476 --rc genhtml_legend=1 00:05:42.476 --rc geninfo_all_blocks=1 00:05:42.476 --rc geninfo_unexecuted_blocks=1 00:05:42.476 00:05:42.476 ' 00:05:42.477 18:14:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:42.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.477 --rc genhtml_branch_coverage=1 00:05:42.477 --rc genhtml_function_coverage=1 00:05:42.477 --rc genhtml_legend=1 00:05:42.477 --rc geninfo_all_blocks=1 00:05:42.477 --rc geninfo_unexecuted_blocks=1 00:05:42.477 00:05:42.477 ' 00:05:42.477 18:14:40 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:42.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.477 --rc genhtml_branch_coverage=1 00:05:42.477 --rc genhtml_function_coverage=1 00:05:42.477 --rc genhtml_legend=1 00:05:42.477 --rc geninfo_all_blocks=1 00:05:42.477 --rc geninfo_unexecuted_blocks=1 00:05:42.477 00:05:42.477 ' 00:05:42.477 18:14:40 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:42.477 18:14:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.477 18:14:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.477 18:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:42.477 ************************************ 00:05:42.477 START TEST env_memory 00:05:42.477 ************************************ 00:05:42.477 18:14:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:42.477 00:05:42.477 00:05:42.477 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.477 http://cunit.sourceforge.net/ 00:05:42.477 00:05:42.477 00:05:42.477 Suite: memory 00:05:42.477 Test: alloc and free memory map ...[2024-11-17 18:14:40.714416] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:42.477 passed 00:05:42.736 Test: mem map translation ...[2024-11-17 18:14:40.745825] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:42.736 [2024-11-17 18:14:40.745870] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:42.736 [2024-11-17 18:14:40.745936] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:42.736 [2024-11-17 18:14:40.745949] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:42.736 passed 00:05:42.736 Test: mem map registration ...[2024-11-17 18:14:40.810690] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:42.736 [2024-11-17 18:14:40.810747] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:42.736 passed 00:05:42.736 Test: mem map adjacent registrations ...passed 00:05:42.736 00:05:42.736 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.736 suites 1 1 n/a 0 0 00:05:42.736 tests 4 4 4 0 0 00:05:42.736 asserts 152 152 152 0 n/a 00:05:42.736 00:05:42.736 Elapsed time = 0.213 seconds 00:05:42.736 00:05:42.736 real 0m0.230s 00:05:42.736 user 0m0.208s 00:05:42.736 sys 0m0.015s 00:05:42.736 18:14:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.736 18:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:42.736 ************************************ 00:05:42.736 END TEST env_memory 00:05:42.736 ************************************ 00:05:42.736 18:14:40 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:42.736 18:14:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.736 18:14:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.736 18:14:40 -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.736 ************************************ 00:05:42.736 START TEST env_vtophys 00:05:42.736 ************************************ 00:05:42.736 18:14:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:42.736 EAL: lib.eal log level changed from notice to debug 00:05:42.736 EAL: Detected lcore 0 as core 0 on socket 0 00:05:42.736 EAL: Detected lcore 1 as core 0 on socket 0 00:05:42.736 EAL: Detected lcore 2 as core 0 on socket 0 00:05:42.736 EAL: Detected lcore 3 as core 0 on socket 0 00:05:42.736 EAL: Detected lcore 4 as core 0 on socket 0 00:05:42.736 EAL: Detected lcore 5 as core 0 on socket 0 00:05:42.736 EAL: Detected lcore 6 as core 0 on socket 0 00:05:42.736 EAL: Detected lcore 7 as core 0 on socket 0 00:05:42.736 EAL: Detected lcore 8 as core 0 on socket 0 00:05:42.736 EAL: Detected lcore 9 as core 0 on socket 0 00:05:42.736 EAL: Maximum logical cores by configuration: 128 00:05:42.736 EAL: Detected CPU lcores: 10 00:05:42.736 EAL: Detected NUMA nodes: 1 00:05:42.736 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:42.736 EAL: Detected shared linkage of DPDK 00:05:42.736 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:42.736 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:42.736 EAL: Registered [vdev] bus. 00:05:42.736 EAL: bus.vdev log level changed from disabled to notice 00:05:42.736 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:42.736 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:42.736 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:42.736 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:42.736 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:42.736 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:42.736 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:42.736 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:42.736 EAL: No shared files mode enabled, IPC will be disabled 00:05:42.736 EAL: No shared files mode enabled, IPC is disabled 00:05:42.736 EAL: Selected IOVA mode 'PA' 00:05:42.736 EAL: Probing VFIO support... 00:05:42.736 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:42.736 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:42.736 EAL: Ask a virtual area of 0x2e000 bytes 00:05:42.736 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:42.736 EAL: Setting up physically contiguous memory... 
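The 'Probing VFIO support' lines above come from EAL checking /sys/module/vfio; the module is absent in this VM, so the probe is skipped and the run continues in IOVA mode 'PA'. A minimal sketch of checking, and where permitted loading, the module before a run, assuming root and a kernel that ships vfio-pci:

    if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
        echo "vfio already loaded; EAL will find it"
    else
        modprobe vfio-pci 2>/dev/null || echo "vfio-pci unavailable; EAL falls back as logged above"
    fi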
00:05:42.736 EAL: Setting maximum number of open files to 524288 00:05:42.736 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:42.736 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:42.736 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.736 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:42.736 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.736 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.736 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:42.736 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:42.736 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.736 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:42.736 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.736 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.736 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:42.736 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:42.736 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.736 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:42.736 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.737 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.737 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:42.737 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:42.737 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.737 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:42.737 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.737 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.737 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:42.737 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:42.737 EAL: Hugepages will be freed exactly as allocated. 00:05:42.737 EAL: No shared files mode enabled, IPC is disabled 00:05:42.737 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: TSC frequency is ~2200000 KHz 00:05:42.997 EAL: Main lcore 0 is ready (tid=7f3f3786fa00;cpuset=[0]) 00:05:42.997 EAL: Trying to obtain current memory policy. 00:05:42.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.997 EAL: Restoring previous memory policy: 0 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was expanded by 2MB 00:05:42.997 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:42.997 EAL: Mem event callback 'spdk:(nil)' registered 00:05:42.997 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:42.997 00:05:42.997 00:05:42.997 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.997 http://cunit.sourceforge.net/ 00:05:42.997 00:05:42.997 00:05:42.997 Suite: components_suite 00:05:42.997 Test: vtophys_malloc_test ...passed 00:05:42.997 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
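The memseg lists and the 'Hugepages will be freed exactly as allocated' note above draw on the 2 MB hugepage pool reported earlier by setup.sh status (node0 2048kB 2048 / 2048). A quick way to inspect that pool on the host while the heap below grows and shrinks, assuming the standard procfs/sysfs paths:

    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages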
00:05:42.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.997 EAL: Restoring previous memory policy: 4 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was expanded by 4MB 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was shrunk by 4MB 00:05:42.997 EAL: Trying to obtain current memory policy. 00:05:42.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.997 EAL: Restoring previous memory policy: 4 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was expanded by 6MB 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was shrunk by 6MB 00:05:42.997 EAL: Trying to obtain current memory policy. 00:05:42.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.997 EAL: Restoring previous memory policy: 4 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was expanded by 10MB 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was shrunk by 10MB 00:05:42.997 EAL: Trying to obtain current memory policy. 00:05:42.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.997 EAL: Restoring previous memory policy: 4 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was expanded by 18MB 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was shrunk by 18MB 00:05:42.997 EAL: Trying to obtain current memory policy. 00:05:42.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.997 EAL: Restoring previous memory policy: 4 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was expanded by 34MB 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was shrunk by 34MB 00:05:42.997 EAL: Trying to obtain current memory policy. 
00:05:42.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.997 EAL: Restoring previous memory policy: 4 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was expanded by 66MB 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was shrunk by 66MB 00:05:42.997 EAL: Trying to obtain current memory policy. 00:05:42.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.997 EAL: Restoring previous memory policy: 4 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was expanded by 130MB 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was shrunk by 130MB 00:05:42.997 EAL: Trying to obtain current memory policy. 00:05:42.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.997 EAL: Restoring previous memory policy: 4 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.997 EAL: request: mp_malloc_sync 00:05:42.997 EAL: No shared files mode enabled, IPC is disabled 00:05:42.997 EAL: Heap on socket 0 was expanded by 258MB 00:05:42.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.257 EAL: request: mp_malloc_sync 00:05:43.257 EAL: No shared files mode enabled, IPC is disabled 00:05:43.257 EAL: Heap on socket 0 was shrunk by 258MB 00:05:43.257 EAL: Trying to obtain current memory policy. 00:05:43.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.257 EAL: Restoring previous memory policy: 4 00:05:43.257 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.257 EAL: request: mp_malloc_sync 00:05:43.257 EAL: No shared files mode enabled, IPC is disabled 00:05:43.257 EAL: Heap on socket 0 was expanded by 514MB 00:05:43.257 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.257 EAL: request: mp_malloc_sync 00:05:43.257 EAL: No shared files mode enabled, IPC is disabled 00:05:43.257 EAL: Heap on socket 0 was shrunk by 514MB 00:05:43.257 EAL: Trying to obtain current memory policy. 
00:05:43.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.517 EAL: Restoring previous memory policy: 4 00:05:43.517 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.517 EAL: request: mp_malloc_sync 00:05:43.517 EAL: No shared files mode enabled, IPC is disabled 00:05:43.517 EAL: Heap on socket 0 was expanded by 1026MB 00:05:43.517 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.776 EAL: request: mp_malloc_sync 00:05:43.776 passed 00:05:43.776 00:05:43.776 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.776 suites 1 1 n/a 0 0 00:05:43.776 tests 2 2 2 0 0 00:05:43.776 asserts 5281 5281 5281 0 n/a 00:05:43.776 00:05:43.776 Elapsed time = 0.692 seconds 00:05:43.776 EAL: No shared files mode enabled, IPC is disabled 00:05:43.776 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:43.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.776 EAL: request: mp_malloc_sync 00:05:43.776 EAL: No shared files mode enabled, IPC is disabled 00:05:43.776 EAL: Heap on socket 0 was shrunk by 2MB 00:05:43.776 EAL: No shared files mode enabled, IPC is disabled 00:05:43.776 EAL: No shared files mode enabled, IPC is disabled 00:05:43.776 EAL: No shared files mode enabled, IPC is disabled 00:05:43.776 00:05:43.776 real 0m0.889s 00:05:43.776 user 0m0.450s 00:05:43.776 sys 0m0.304s 00:05:43.776 ************************************ 00:05:43.776 END TEST env_vtophys 00:05:43.776 ************************************ 00:05:43.776 18:14:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.776 18:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.776 18:14:41 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.776 18:14:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.776 18:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.776 18:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.776 ************************************ 00:05:43.776 START TEST env_pci 00:05:43.776 ************************************ 00:05:43.776 18:14:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.777 00:05:43.777 00:05:43.777 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.777 http://cunit.sourceforge.net/ 00:05:43.777 00:05:43.777 00:05:43.777 Suite: pci 00:05:43.777 Test: pci_hook ...[2024-11-17 18:14:41.906817] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65448 has claimed it 00:05:43.777 passed 00:05:43.777 00:05:43.777 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.777 suites 1 1 n/a 0 0 00:05:43.777 tests 1 1 1 0 0 00:05:43.777 asserts 25 25 25 0 n/a 00:05:43.777 00:05:43.777 Elapsed time = 0.002 seconds 00:05:43.777 EAL: Cannot find device (10000:00:01.0) 00:05:43.777 EAL: Failed to attach device on primary process 00:05:43.777 ************************************ 00:05:43.777 END TEST env_pci 00:05:43.777 ************************************ 00:05:43.777 00:05:43.777 real 0m0.018s 00:05:43.777 user 0m0.007s 00:05:43.777 sys 0m0.011s 00:05:43.777 18:14:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.777 18:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.777 18:14:41 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:43.777 18:14:41 -- env/env.sh@15 -- # uname 00:05:43.777 18:14:41 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:43.777 18:14:41 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:43.777 18:14:41 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.777 18:14:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:43.777 18:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.777 18:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.777 ************************************ 00:05:43.777 START TEST env_dpdk_post_init 00:05:43.777 ************************************ 00:05:43.777 18:14:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.777 EAL: Detected CPU lcores: 10 00:05:43.777 EAL: Detected NUMA nodes: 1 00:05:43.777 EAL: Detected shared linkage of DPDK 00:05:43.777 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.777 EAL: Selected IOVA mode 'PA' 00:05:44.036 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.036 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:44.036 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:44.036 Starting DPDK initialization... 00:05:44.036 Starting SPDK post initialization... 00:05:44.036 SPDK NVMe probe 00:05:44.036 Attaching to 0000:00:06.0 00:05:44.036 Attaching to 0000:00:07.0 00:05:44.036 Attached to 0000:00:06.0 00:05:44.036 Attached to 0000:00:07.0 00:05:44.036 Cleaning up... 00:05:44.036 00:05:44.036 real 0m0.172s 00:05:44.036 user 0m0.036s 00:05:44.036 sys 0m0.035s 00:05:44.036 ************************************ 00:05:44.036 END TEST env_dpdk_post_init 00:05:44.036 ************************************ 00:05:44.036 18:14:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.036 18:14:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.036 18:14:42 -- env/env.sh@26 -- # uname 00:05:44.036 18:14:42 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:44.036 18:14:42 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.036 18:14:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.036 18:14:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.036 18:14:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.036 ************************************ 00:05:44.036 START TEST env_mem_callbacks 00:05:44.036 ************************************ 00:05:44.036 18:14:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.036 EAL: Detected CPU lcores: 10 00:05:44.036 EAL: Detected NUMA nodes: 1 00:05:44.036 EAL: Detected shared linkage of DPDK 00:05:44.036 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.036 EAL: Selected IOVA mode 'PA' 00:05:44.295 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.295 00:05:44.295 00:05:44.295 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.295 http://cunit.sourceforge.net/ 00:05:44.295 00:05:44.295 00:05:44.295 Suite: memory 00:05:44.295 Test: test ... 
00:05:44.295 register 0x200000200000 2097152 00:05:44.295 malloc 3145728 00:05:44.295 register 0x200000400000 4194304 00:05:44.295 buf 0x200000500000 len 3145728 PASSED 00:05:44.295 malloc 64 00:05:44.295 buf 0x2000004fff40 len 64 PASSED 00:05:44.295 malloc 4194304 00:05:44.295 register 0x200000800000 6291456 00:05:44.295 buf 0x200000a00000 len 4194304 PASSED 00:05:44.295 free 0x200000500000 3145728 00:05:44.295 free 0x2000004fff40 64 00:05:44.295 unregister 0x200000400000 4194304 PASSED 00:05:44.295 free 0x200000a00000 4194304 00:05:44.295 unregister 0x200000800000 6291456 PASSED 00:05:44.295 malloc 8388608 00:05:44.295 register 0x200000400000 10485760 00:05:44.295 buf 0x200000600000 len 8388608 PASSED 00:05:44.295 free 0x200000600000 8388608 00:05:44.295 unregister 0x200000400000 10485760 PASSED 00:05:44.295 passed 00:05:44.295 00:05:44.295 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.295 suites 1 1 n/a 0 0 00:05:44.295 tests 1 1 1 0 0 00:05:44.295 asserts 15 15 15 0 n/a 00:05:44.295 00:05:44.295 Elapsed time = 0.008 seconds 00:05:44.295 00:05:44.295 real 0m0.141s 00:05:44.295 user 0m0.014s 00:05:44.295 sys 0m0.025s 00:05:44.295 18:14:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.295 18:14:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.295 ************************************ 00:05:44.295 END TEST env_mem_callbacks 00:05:44.295 ************************************ 00:05:44.295 00:05:44.295 real 0m1.903s 00:05:44.295 user 0m0.918s 00:05:44.295 sys 0m0.624s 00:05:44.295 18:14:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.295 18:14:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.295 ************************************ 00:05:44.295 END TEST env 00:05:44.295 ************************************ 00:05:44.295 18:14:42 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:44.295 18:14:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.295 18:14:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.295 18:14:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.295 ************************************ 00:05:44.295 START TEST rpc 00:05:44.295 ************************************ 00:05:44.295 18:14:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:44.295 * Looking for test storage... 
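The env sub-tests that finished above (env_pci, env_dpdk_post_init, env_mem_callbacks) are each wrapped by run_test in env.sh. A minimal sketch of invoking the same binaries directly from this tree, including the extra EAL arguments env.sh assembled for the post-init case:

    testdir=/home/vagrant/spdk_repo/spdk/test/env
    "$testdir/pci/pci_ut"
    "$testdir/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000
    "$testdir/mem_callbacks/mem_callbacks"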
00:05:44.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:44.295 18:14:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:44.295 18:14:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:44.295 18:14:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:44.555 18:14:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:44.555 18:14:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:44.555 18:14:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:44.555 18:14:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:44.555 18:14:42 -- scripts/common.sh@335 -- # IFS=.-: 00:05:44.555 18:14:42 -- scripts/common.sh@335 -- # read -ra ver1 00:05:44.555 18:14:42 -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.555 18:14:42 -- scripts/common.sh@336 -- # read -ra ver2 00:05:44.555 18:14:42 -- scripts/common.sh@337 -- # local 'op=<' 00:05:44.555 18:14:42 -- scripts/common.sh@339 -- # ver1_l=2 00:05:44.555 18:14:42 -- scripts/common.sh@340 -- # ver2_l=1 00:05:44.555 18:14:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:44.555 18:14:42 -- scripts/common.sh@343 -- # case "$op" in 00:05:44.555 18:14:42 -- scripts/common.sh@344 -- # : 1 00:05:44.555 18:14:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:44.555 18:14:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.555 18:14:42 -- scripts/common.sh@364 -- # decimal 1 00:05:44.555 18:14:42 -- scripts/common.sh@352 -- # local d=1 00:05:44.555 18:14:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.555 18:14:42 -- scripts/common.sh@354 -- # echo 1 00:05:44.555 18:14:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:44.555 18:14:42 -- scripts/common.sh@365 -- # decimal 2 00:05:44.555 18:14:42 -- scripts/common.sh@352 -- # local d=2 00:05:44.555 18:14:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.555 18:14:42 -- scripts/common.sh@354 -- # echo 2 00:05:44.555 18:14:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:44.555 18:14:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:44.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
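The scripts/common.sh trace above compares dotted versions field by field (here lcov 1.15 against 2) to pick the coverage options exported just below. A shorter sketch of the same 'A is older than B' test, assuming GNU sort -V is available:

    ver_lt() {
        [[ $1 != "$2" ]] && [[ $(printf '%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]
    }
    ver_lt 1.15 2 && echo 'lcov is older than 2'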
00:05:44.555 18:14:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:44.555 18:14:42 -- scripts/common.sh@367 -- # return 0 00:05:44.555 18:14:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.555 18:14:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:44.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.555 --rc genhtml_branch_coverage=1 00:05:44.555 --rc genhtml_function_coverage=1 00:05:44.555 --rc genhtml_legend=1 00:05:44.555 --rc geninfo_all_blocks=1 00:05:44.555 --rc geninfo_unexecuted_blocks=1 00:05:44.555 00:05:44.555 ' 00:05:44.555 18:14:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:44.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.555 --rc genhtml_branch_coverage=1 00:05:44.555 --rc genhtml_function_coverage=1 00:05:44.555 --rc genhtml_legend=1 00:05:44.555 --rc geninfo_all_blocks=1 00:05:44.555 --rc geninfo_unexecuted_blocks=1 00:05:44.555 00:05:44.555 ' 00:05:44.555 18:14:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:44.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.555 --rc genhtml_branch_coverage=1 00:05:44.555 --rc genhtml_function_coverage=1 00:05:44.555 --rc genhtml_legend=1 00:05:44.555 --rc geninfo_all_blocks=1 00:05:44.555 --rc geninfo_unexecuted_blocks=1 00:05:44.555 00:05:44.555 ' 00:05:44.555 18:14:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:44.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.555 --rc genhtml_branch_coverage=1 00:05:44.555 --rc genhtml_function_coverage=1 00:05:44.555 --rc genhtml_legend=1 00:05:44.555 --rc geninfo_all_blocks=1 00:05:44.555 --rc geninfo_unexecuted_blocks=1 00:05:44.555 00:05:44.555 ' 00:05:44.555 18:14:42 -- rpc/rpc.sh@65 -- # spdk_pid=65564 00:05:44.555 18:14:42 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.555 18:14:42 -- rpc/rpc.sh@67 -- # waitforlisten 65564 00:05:44.555 18:14:42 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:44.555 18:14:42 -- common/autotest_common.sh@829 -- # '[' -z 65564 ']' 00:05:44.555 18:14:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.555 18:14:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.555 18:14:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.555 18:14:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.555 18:14:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.555 [2024-11-17 18:14:42.684058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:44.555 [2024-11-17 18:14:42.684178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65564 ] 00:05:44.555 [2024-11-17 18:14:42.818731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.815 [2024-11-17 18:14:42.859168] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.815 [2024-11-17 18:14:42.859381] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
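rpc.sh, traced above, backgrounds spdk_tgt with the bdev tracepoint group enabled and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, using rpc.py from the same tree as the readiness probe (an assumption; the real waitforlisten does its own socket polling):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    trap 'kill $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done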
00:05:44.815 [2024-11-17 18:14:42.859401] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65564' to capture a snapshot of events at runtime. 00:05:44.815 [2024-11-17 18:14:42.859412] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65564 for offline analysis/debug. 00:05:44.815 [2024-11-17 18:14:42.859455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.753 18:14:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.753 18:14:43 -- common/autotest_common.sh@862 -- # return 0 00:05:45.753 18:14:43 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.753 18:14:43 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.753 18:14:43 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.753 18:14:43 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.753 18:14:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.753 18:14:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.753 18:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.753 ************************************ 00:05:45.753 START TEST rpc_integrity 00:05:45.753 ************************************ 00:05:45.753 18:14:43 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:45.753 18:14:43 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.753 18:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.753 18:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.753 18:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.753 18:14:43 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.753 18:14:43 -- rpc/rpc.sh@13 -- # jq length 00:05:45.753 18:14:43 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.753 18:14:43 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.753 18:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.753 18:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.753 18:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.753 18:14:43 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.753 18:14:43 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.753 18:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.753 18:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.753 18:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.753 18:14:43 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.753 { 00:05:45.753 "name": "Malloc0", 00:05:45.753 "aliases": [ 00:05:45.753 "904482fc-70f4-4006-8d0c-949821ac9e3c" 00:05:45.753 ], 00:05:45.753 "product_name": "Malloc disk", 00:05:45.753 "block_size": 512, 00:05:45.753 "num_blocks": 16384, 00:05:45.753 "uuid": "904482fc-70f4-4006-8d0c-949821ac9e3c", 00:05:45.753 "assigned_rate_limits": { 00:05:45.753 "rw_ios_per_sec": 0, 00:05:45.753 "rw_mbytes_per_sec": 0, 00:05:45.753 "r_mbytes_per_sec": 0, 00:05:45.753 "w_mbytes_per_sec": 0 00:05:45.753 }, 00:05:45.753 "claimed": false, 00:05:45.753 "zoned": false, 00:05:45.753 "supported_io_types": { 00:05:45.753 "read": true, 00:05:45.753 "write": true, 00:05:45.753 "unmap": true, 00:05:45.753 "write_zeroes": true, 00:05:45.753 "flush": true, 00:05:45.753 
"reset": true, 00:05:45.753 "compare": false, 00:05:45.753 "compare_and_write": false, 00:05:45.753 "abort": true, 00:05:45.753 "nvme_admin": false, 00:05:45.753 "nvme_io": false 00:05:45.753 }, 00:05:45.753 "memory_domains": [ 00:05:45.753 { 00:05:45.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.753 "dma_device_type": 2 00:05:45.753 } 00:05:45.753 ], 00:05:45.753 "driver_specific": {} 00:05:45.753 } 00:05:45.753 ]' 00:05:45.753 18:14:43 -- rpc/rpc.sh@17 -- # jq length 00:05:45.753 18:14:43 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.753 18:14:43 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.753 18:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.753 18:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.753 [2024-11-17 18:14:43.889398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.753 [2024-11-17 18:14:43.889461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.753 [2024-11-17 18:14:43.889478] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf5d030 00:05:45.753 [2024-11-17 18:14:43.889486] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.753 [2024-11-17 18:14:43.891010] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.753 [2024-11-17 18:14:43.891058] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.753 Passthru0 00:05:45.753 18:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.753 18:14:43 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.753 18:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.753 18:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.753 18:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.753 18:14:43 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.753 { 00:05:45.753 "name": "Malloc0", 00:05:45.753 "aliases": [ 00:05:45.753 "904482fc-70f4-4006-8d0c-949821ac9e3c" 00:05:45.753 ], 00:05:45.753 "product_name": "Malloc disk", 00:05:45.753 "block_size": 512, 00:05:45.753 "num_blocks": 16384, 00:05:45.753 "uuid": "904482fc-70f4-4006-8d0c-949821ac9e3c", 00:05:45.753 "assigned_rate_limits": { 00:05:45.753 "rw_ios_per_sec": 0, 00:05:45.753 "rw_mbytes_per_sec": 0, 00:05:45.753 "r_mbytes_per_sec": 0, 00:05:45.753 "w_mbytes_per_sec": 0 00:05:45.753 }, 00:05:45.753 "claimed": true, 00:05:45.753 "claim_type": "exclusive_write", 00:05:45.753 "zoned": false, 00:05:45.753 "supported_io_types": { 00:05:45.753 "read": true, 00:05:45.753 "write": true, 00:05:45.753 "unmap": true, 00:05:45.754 "write_zeroes": true, 00:05:45.754 "flush": true, 00:05:45.754 "reset": true, 00:05:45.754 "compare": false, 00:05:45.754 "compare_and_write": false, 00:05:45.754 "abort": true, 00:05:45.754 "nvme_admin": false, 00:05:45.754 "nvme_io": false 00:05:45.754 }, 00:05:45.754 "memory_domains": [ 00:05:45.754 { 00:05:45.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.754 "dma_device_type": 2 00:05:45.754 } 00:05:45.754 ], 00:05:45.754 "driver_specific": {} 00:05:45.754 }, 00:05:45.754 { 00:05:45.754 "name": "Passthru0", 00:05:45.754 "aliases": [ 00:05:45.754 "735cc619-f24b-5d9f-b3ce-fed8afadafef" 00:05:45.754 ], 00:05:45.754 "product_name": "passthru", 00:05:45.754 "block_size": 512, 00:05:45.754 "num_blocks": 16384, 00:05:45.754 "uuid": "735cc619-f24b-5d9f-b3ce-fed8afadafef", 00:05:45.754 "assigned_rate_limits": { 00:05:45.754 "rw_ios_per_sec": 0, 00:05:45.754 
"rw_mbytes_per_sec": 0, 00:05:45.754 "r_mbytes_per_sec": 0, 00:05:45.754 "w_mbytes_per_sec": 0 00:05:45.754 }, 00:05:45.754 "claimed": false, 00:05:45.754 "zoned": false, 00:05:45.754 "supported_io_types": { 00:05:45.754 "read": true, 00:05:45.754 "write": true, 00:05:45.754 "unmap": true, 00:05:45.754 "write_zeroes": true, 00:05:45.754 "flush": true, 00:05:45.754 "reset": true, 00:05:45.754 "compare": false, 00:05:45.754 "compare_and_write": false, 00:05:45.754 "abort": true, 00:05:45.754 "nvme_admin": false, 00:05:45.754 "nvme_io": false 00:05:45.754 }, 00:05:45.754 "memory_domains": [ 00:05:45.754 { 00:05:45.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.754 "dma_device_type": 2 00:05:45.754 } 00:05:45.754 ], 00:05:45.754 "driver_specific": { 00:05:45.754 "passthru": { 00:05:45.754 "name": "Passthru0", 00:05:45.754 "base_bdev_name": "Malloc0" 00:05:45.754 } 00:05:45.754 } 00:05:45.754 } 00:05:45.754 ]' 00:05:45.754 18:14:43 -- rpc/rpc.sh@21 -- # jq length 00:05:45.754 18:14:43 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.754 18:14:43 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.754 18:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.754 18:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.754 18:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.754 18:14:43 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.754 18:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.754 18:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.754 18:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.754 18:14:43 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.754 18:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.754 18:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.754 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.754 18:14:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.754 18:14:44 -- rpc/rpc.sh@26 -- # jq length 00:05:46.013 ************************************ 00:05:46.013 END TEST rpc_integrity 00:05:46.013 ************************************ 00:05:46.013 18:14:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.013 00:05:46.013 real 0m0.312s 00:05:46.013 user 0m0.209s 00:05:46.013 sys 0m0.038s 00:05:46.013 18:14:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.013 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.013 18:14:44 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:46.013 18:14:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.013 18:14:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.013 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.013 ************************************ 00:05:46.013 START TEST rpc_plugins 00:05:46.013 ************************************ 00:05:46.013 18:14:44 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:46.013 18:14:44 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:46.013 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.013 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.013 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.013 18:14:44 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:46.013 18:14:44 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:46.013 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.013 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.013 18:14:44 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.013 18:14:44 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:46.013 { 00:05:46.013 "name": "Malloc1", 00:05:46.013 "aliases": [ 00:05:46.013 "c1ffd2e6-3c3e-4ada-90a8-1ed2ede59afe" 00:05:46.013 ], 00:05:46.013 "product_name": "Malloc disk", 00:05:46.013 "block_size": 4096, 00:05:46.013 "num_blocks": 256, 00:05:46.013 "uuid": "c1ffd2e6-3c3e-4ada-90a8-1ed2ede59afe", 00:05:46.013 "assigned_rate_limits": { 00:05:46.013 "rw_ios_per_sec": 0, 00:05:46.013 "rw_mbytes_per_sec": 0, 00:05:46.013 "r_mbytes_per_sec": 0, 00:05:46.013 "w_mbytes_per_sec": 0 00:05:46.013 }, 00:05:46.013 "claimed": false, 00:05:46.013 "zoned": false, 00:05:46.013 "supported_io_types": { 00:05:46.013 "read": true, 00:05:46.013 "write": true, 00:05:46.013 "unmap": true, 00:05:46.013 "write_zeroes": true, 00:05:46.013 "flush": true, 00:05:46.013 "reset": true, 00:05:46.013 "compare": false, 00:05:46.013 "compare_and_write": false, 00:05:46.013 "abort": true, 00:05:46.013 "nvme_admin": false, 00:05:46.013 "nvme_io": false 00:05:46.013 }, 00:05:46.013 "memory_domains": [ 00:05:46.013 { 00:05:46.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.014 "dma_device_type": 2 00:05:46.014 } 00:05:46.014 ], 00:05:46.014 "driver_specific": {} 00:05:46.014 } 00:05:46.014 ]' 00:05:46.014 18:14:44 -- rpc/rpc.sh@32 -- # jq length 00:05:46.014 18:14:44 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:46.014 18:14:44 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:46.014 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.014 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.014 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.014 18:14:44 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:46.014 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.014 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.014 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.014 18:14:44 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:46.014 18:14:44 -- rpc/rpc.sh@36 -- # jq length 00:05:46.014 ************************************ 00:05:46.014 END TEST rpc_plugins 00:05:46.014 ************************************ 00:05:46.014 18:14:44 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:46.014 00:05:46.014 real 0m0.149s 00:05:46.014 user 0m0.097s 00:05:46.014 sys 0m0.018s 00:05:46.014 18:14:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.014 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.273 18:14:44 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:46.273 18:14:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.273 18:14:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.273 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.273 ************************************ 00:05:46.273 START TEST rpc_trace_cmd_test 00:05:46.273 ************************************ 00:05:46.273 18:14:44 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:46.273 18:14:44 -- rpc/rpc.sh@40 -- # local info 00:05:46.273 18:14:44 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:46.273 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.273 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.273 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.273 18:14:44 -- rpc/rpc.sh@42 -- # info='{ 00:05:46.273 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65564", 00:05:46.273 
"tpoint_group_mask": "0x8", 00:05:46.273 "iscsi_conn": { 00:05:46.273 "mask": "0x2", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 }, 00:05:46.273 "scsi": { 00:05:46.273 "mask": "0x4", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 }, 00:05:46.273 "bdev": { 00:05:46.273 "mask": "0x8", 00:05:46.273 "tpoint_mask": "0xffffffffffffffff" 00:05:46.273 }, 00:05:46.273 "nvmf_rdma": { 00:05:46.273 "mask": "0x10", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 }, 00:05:46.273 "nvmf_tcp": { 00:05:46.273 "mask": "0x20", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 }, 00:05:46.273 "ftl": { 00:05:46.273 "mask": "0x40", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 }, 00:05:46.273 "blobfs": { 00:05:46.273 "mask": "0x80", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 }, 00:05:46.273 "dsa": { 00:05:46.273 "mask": "0x200", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 }, 00:05:46.273 "thread": { 00:05:46.273 "mask": "0x400", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 }, 00:05:46.273 "nvme_pcie": { 00:05:46.273 "mask": "0x800", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 }, 00:05:46.273 "iaa": { 00:05:46.273 "mask": "0x1000", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 }, 00:05:46.273 "nvme_tcp": { 00:05:46.273 "mask": "0x2000", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 }, 00:05:46.273 "bdev_nvme": { 00:05:46.273 "mask": "0x4000", 00:05:46.273 "tpoint_mask": "0x0" 00:05:46.273 } 00:05:46.273 }' 00:05:46.273 18:14:44 -- rpc/rpc.sh@43 -- # jq length 00:05:46.273 18:14:44 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:46.273 18:14:44 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.273 18:14:44 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.273 18:14:44 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.273 18:14:44 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.273 18:14:44 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.534 18:14:44 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.534 18:14:44 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.534 ************************************ 00:05:46.534 END TEST rpc_trace_cmd_test 00:05:46.534 ************************************ 00:05:46.534 18:14:44 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:46.534 00:05:46.534 real 0m0.284s 00:05:46.534 user 0m0.248s 00:05:46.534 sys 0m0.024s 00:05:46.534 18:14:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.534 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 18:14:44 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.534 18:14:44 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.534 18:14:44 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.534 18:14:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.534 18:14:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.534 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 ************************************ 00:05:46.534 START TEST rpc_daemon_integrity 00:05:46.534 ************************************ 00:05:46.534 18:14:44 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:46.534 18:14:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.534 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.534 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.534 18:14:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.534 18:14:44 -- rpc/rpc.sh@13 -- # jq length 00:05:46.534 18:14:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:05:46.534 18:14:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.534 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.534 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.534 18:14:44 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.534 18:14:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.534 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.534 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.534 18:14:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.534 { 00:05:46.534 "name": "Malloc2", 00:05:46.534 "aliases": [ 00:05:46.534 "6ce2c111-0d29-4491-8c9a-bc85e9fb3c2c" 00:05:46.534 ], 00:05:46.534 "product_name": "Malloc disk", 00:05:46.534 "block_size": 512, 00:05:46.534 "num_blocks": 16384, 00:05:46.534 "uuid": "6ce2c111-0d29-4491-8c9a-bc85e9fb3c2c", 00:05:46.534 "assigned_rate_limits": { 00:05:46.534 "rw_ios_per_sec": 0, 00:05:46.534 "rw_mbytes_per_sec": 0, 00:05:46.534 "r_mbytes_per_sec": 0, 00:05:46.534 "w_mbytes_per_sec": 0 00:05:46.534 }, 00:05:46.534 "claimed": false, 00:05:46.534 "zoned": false, 00:05:46.534 "supported_io_types": { 00:05:46.534 "read": true, 00:05:46.534 "write": true, 00:05:46.534 "unmap": true, 00:05:46.534 "write_zeroes": true, 00:05:46.534 "flush": true, 00:05:46.534 "reset": true, 00:05:46.534 "compare": false, 00:05:46.534 "compare_and_write": false, 00:05:46.534 "abort": true, 00:05:46.534 "nvme_admin": false, 00:05:46.534 "nvme_io": false 00:05:46.534 }, 00:05:46.534 "memory_domains": [ 00:05:46.534 { 00:05:46.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.534 "dma_device_type": 2 00:05:46.534 } 00:05:46.534 ], 00:05:46.534 "driver_specific": {} 00:05:46.534 } 00:05:46.534 ]' 00:05:46.534 18:14:44 -- rpc/rpc.sh@17 -- # jq length 00:05:46.534 18:14:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.534 18:14:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.534 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.534 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.534 [2024-11-17 18:14:44.789742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.534 [2024-11-17 18:14:44.789967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.534 [2024-11-17 18:14:44.790008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10fbfe0 00:05:46.534 [2024-11-17 18:14:44.790018] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.534 [2024-11-17 18:14:44.791410] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.534 [2024-11-17 18:14:44.791444] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.534 Passthru0 00:05:46.534 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.534 18:14:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.534 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.534 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.794 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.794 18:14:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.794 { 00:05:46.794 "name": "Malloc2", 00:05:46.794 "aliases": [ 00:05:46.794 "6ce2c111-0d29-4491-8c9a-bc85e9fb3c2c" 00:05:46.794 ], 00:05:46.794 "product_name": 
"Malloc disk", 00:05:46.794 "block_size": 512, 00:05:46.794 "num_blocks": 16384, 00:05:46.794 "uuid": "6ce2c111-0d29-4491-8c9a-bc85e9fb3c2c", 00:05:46.794 "assigned_rate_limits": { 00:05:46.794 "rw_ios_per_sec": 0, 00:05:46.794 "rw_mbytes_per_sec": 0, 00:05:46.794 "r_mbytes_per_sec": 0, 00:05:46.794 "w_mbytes_per_sec": 0 00:05:46.794 }, 00:05:46.794 "claimed": true, 00:05:46.794 "claim_type": "exclusive_write", 00:05:46.794 "zoned": false, 00:05:46.794 "supported_io_types": { 00:05:46.794 "read": true, 00:05:46.794 "write": true, 00:05:46.794 "unmap": true, 00:05:46.794 "write_zeroes": true, 00:05:46.794 "flush": true, 00:05:46.794 "reset": true, 00:05:46.794 "compare": false, 00:05:46.794 "compare_and_write": false, 00:05:46.794 "abort": true, 00:05:46.794 "nvme_admin": false, 00:05:46.794 "nvme_io": false 00:05:46.794 }, 00:05:46.794 "memory_domains": [ 00:05:46.794 { 00:05:46.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.794 "dma_device_type": 2 00:05:46.794 } 00:05:46.794 ], 00:05:46.794 "driver_specific": {} 00:05:46.795 }, 00:05:46.795 { 00:05:46.795 "name": "Passthru0", 00:05:46.795 "aliases": [ 00:05:46.795 "5f0942d5-9bf4-5077-94fc-1ea6b60c3081" 00:05:46.795 ], 00:05:46.795 "product_name": "passthru", 00:05:46.795 "block_size": 512, 00:05:46.795 "num_blocks": 16384, 00:05:46.795 "uuid": "5f0942d5-9bf4-5077-94fc-1ea6b60c3081", 00:05:46.795 "assigned_rate_limits": { 00:05:46.795 "rw_ios_per_sec": 0, 00:05:46.795 "rw_mbytes_per_sec": 0, 00:05:46.795 "r_mbytes_per_sec": 0, 00:05:46.795 "w_mbytes_per_sec": 0 00:05:46.795 }, 00:05:46.795 "claimed": false, 00:05:46.795 "zoned": false, 00:05:46.795 "supported_io_types": { 00:05:46.795 "read": true, 00:05:46.795 "write": true, 00:05:46.795 "unmap": true, 00:05:46.795 "write_zeroes": true, 00:05:46.795 "flush": true, 00:05:46.795 "reset": true, 00:05:46.795 "compare": false, 00:05:46.795 "compare_and_write": false, 00:05:46.795 "abort": true, 00:05:46.795 "nvme_admin": false, 00:05:46.795 "nvme_io": false 00:05:46.795 }, 00:05:46.795 "memory_domains": [ 00:05:46.795 { 00:05:46.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.795 "dma_device_type": 2 00:05:46.795 } 00:05:46.795 ], 00:05:46.795 "driver_specific": { 00:05:46.795 "passthru": { 00:05:46.795 "name": "Passthru0", 00:05:46.795 "base_bdev_name": "Malloc2" 00:05:46.795 } 00:05:46.795 } 00:05:46.795 } 00:05:46.795 ]' 00:05:46.795 18:14:44 -- rpc/rpc.sh@21 -- # jq length 00:05:46.795 18:14:44 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.795 18:14:44 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.795 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.795 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.795 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.795 18:14:44 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.795 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.795 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.795 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.795 18:14:44 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.795 18:14:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.795 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.795 18:14:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.795 18:14:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.795 18:14:44 -- rpc/rpc.sh@26 -- # jq length 00:05:46.795 ************************************ 00:05:46.795 END TEST 
rpc_daemon_integrity 00:05:46.795 ************************************ 00:05:46.795 18:14:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.795 00:05:46.795 real 0m0.319s 00:05:46.795 user 0m0.223s 00:05:46.795 sys 0m0.032s 00:05:46.795 18:14:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.795 18:14:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.795 18:14:45 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.795 18:14:45 -- rpc/rpc.sh@84 -- # killprocess 65564 00:05:46.795 18:14:45 -- common/autotest_common.sh@936 -- # '[' -z 65564 ']' 00:05:46.795 18:14:45 -- common/autotest_common.sh@940 -- # kill -0 65564 00:05:46.795 18:14:45 -- common/autotest_common.sh@941 -- # uname 00:05:46.795 18:14:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.795 18:14:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65564 00:05:46.795 killing process with pid 65564 00:05:46.795 18:14:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.795 18:14:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.795 18:14:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65564' 00:05:46.795 18:14:45 -- common/autotest_common.sh@955 -- # kill 65564 00:05:46.795 18:14:45 -- common/autotest_common.sh@960 -- # wait 65564 00:05:47.054 00:05:47.054 real 0m2.854s 00:05:47.054 user 0m3.855s 00:05:47.054 sys 0m0.588s 00:05:47.054 ************************************ 00:05:47.054 END TEST rpc 00:05:47.054 ************************************ 00:05:47.054 18:14:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.054 18:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.314 18:14:45 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:47.314 18:14:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.314 18:14:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.314 18:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.314 ************************************ 00:05:47.314 START TEST rpc_client 00:05:47.314 ************************************ 00:05:47.314 18:14:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:47.314 * Looking for test storage... 
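Both integrity tests in this block (rpc_integrity on Malloc0 and rpc_daemon_integrity on Malloc2) drive the same passthru-on-malloc RPC sequence: create a malloc bdev, claim it with a passthru bdev, verify bdev_get_bdevs reports both, then delete them in reverse order and verify the bdev list is empty again. A condensed sketch of that sequence against a running target on the default RPC socket (the RPC shell variable is shorthand introduced for this sketch only):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 8 512                      # 8 MiB of 512-byte blocks; prints the new name, e.g. Malloc0
  $RPC bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0 and registers Passthru0 on top of it
  $RPC bdev_get_bdevs | jq length                    # expect 2 while both bdevs exist
  $RPC bdev_passthru_delete Passthru0
  $RPC bdev_malloc_delete Malloc0
  $RPC bdev_get_bdevs | jq length                    # expect 0 after teardown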
00:05:47.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:47.314 18:14:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:47.314 18:14:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:47.314 18:14:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:47.314 18:14:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:47.314 18:14:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:47.314 18:14:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:47.314 18:14:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:47.314 18:14:45 -- scripts/common.sh@335 -- # IFS=.-: 00:05:47.314 18:14:45 -- scripts/common.sh@335 -- # read -ra ver1 00:05:47.314 18:14:45 -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.314 18:14:45 -- scripts/common.sh@336 -- # read -ra ver2 00:05:47.314 18:14:45 -- scripts/common.sh@337 -- # local 'op=<' 00:05:47.314 18:14:45 -- scripts/common.sh@339 -- # ver1_l=2 00:05:47.314 18:14:45 -- scripts/common.sh@340 -- # ver2_l=1 00:05:47.314 18:14:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:47.314 18:14:45 -- scripts/common.sh@343 -- # case "$op" in 00:05:47.314 18:14:45 -- scripts/common.sh@344 -- # : 1 00:05:47.314 18:14:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:47.314 18:14:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.314 18:14:45 -- scripts/common.sh@364 -- # decimal 1 00:05:47.314 18:14:45 -- scripts/common.sh@352 -- # local d=1 00:05:47.314 18:14:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.314 18:14:45 -- scripts/common.sh@354 -- # echo 1 00:05:47.314 18:14:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:47.314 18:14:45 -- scripts/common.sh@365 -- # decimal 2 00:05:47.314 18:14:45 -- scripts/common.sh@352 -- # local d=2 00:05:47.314 18:14:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.314 18:14:45 -- scripts/common.sh@354 -- # echo 2 00:05:47.314 18:14:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:47.314 18:14:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:47.314 18:14:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:47.314 18:14:45 -- scripts/common.sh@367 -- # return 0 00:05:47.314 18:14:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.314 18:14:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:47.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.314 --rc genhtml_branch_coverage=1 00:05:47.314 --rc genhtml_function_coverage=1 00:05:47.314 --rc genhtml_legend=1 00:05:47.314 --rc geninfo_all_blocks=1 00:05:47.314 --rc geninfo_unexecuted_blocks=1 00:05:47.314 00:05:47.314 ' 00:05:47.314 18:14:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:47.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.314 --rc genhtml_branch_coverage=1 00:05:47.314 --rc genhtml_function_coverage=1 00:05:47.314 --rc genhtml_legend=1 00:05:47.314 --rc geninfo_all_blocks=1 00:05:47.314 --rc geninfo_unexecuted_blocks=1 00:05:47.314 00:05:47.314 ' 00:05:47.314 18:14:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:47.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.314 --rc genhtml_branch_coverage=1 00:05:47.314 --rc genhtml_function_coverage=1 00:05:47.314 --rc genhtml_legend=1 00:05:47.314 --rc geninfo_all_blocks=1 00:05:47.314 --rc geninfo_unexecuted_blocks=1 00:05:47.314 00:05:47.314 ' 00:05:47.314 
18:14:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:47.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.314 --rc genhtml_branch_coverage=1 00:05:47.314 --rc genhtml_function_coverage=1 00:05:47.314 --rc genhtml_legend=1 00:05:47.314 --rc geninfo_all_blocks=1 00:05:47.314 --rc geninfo_unexecuted_blocks=1 00:05:47.314 00:05:47.314 ' 00:05:47.314 18:14:45 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:47.314 OK 00:05:47.314 18:14:45 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:47.314 00:05:47.314 real 0m0.205s 00:05:47.314 user 0m0.129s 00:05:47.314 sys 0m0.087s 00:05:47.314 18:14:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.314 18:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.314 ************************************ 00:05:47.314 END TEST rpc_client 00:05:47.314 ************************************ 00:05:47.314 18:14:45 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:47.314 18:14:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.314 18:14:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.314 18:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.574 ************************************ 00:05:47.574 START TEST json_config 00:05:47.574 ************************************ 00:05:47.574 18:14:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:47.574 18:14:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:47.574 18:14:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:47.574 18:14:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:47.574 18:14:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:47.574 18:14:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:47.574 18:14:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:47.574 18:14:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:47.574 18:14:45 -- scripts/common.sh@335 -- # IFS=.-: 00:05:47.574 18:14:45 -- scripts/common.sh@335 -- # read -ra ver1 00:05:47.574 18:14:45 -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.574 18:14:45 -- scripts/common.sh@336 -- # read -ra ver2 00:05:47.574 18:14:45 -- scripts/common.sh@337 -- # local 'op=<' 00:05:47.574 18:14:45 -- scripts/common.sh@339 -- # ver1_l=2 00:05:47.574 18:14:45 -- scripts/common.sh@340 -- # ver2_l=1 00:05:47.574 18:14:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:47.574 18:14:45 -- scripts/common.sh@343 -- # case "$op" in 00:05:47.574 18:14:45 -- scripts/common.sh@344 -- # : 1 00:05:47.574 18:14:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:47.574 18:14:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.574 18:14:45 -- scripts/common.sh@364 -- # decimal 1 00:05:47.574 18:14:45 -- scripts/common.sh@352 -- # local d=1 00:05:47.574 18:14:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.574 18:14:45 -- scripts/common.sh@354 -- # echo 1 00:05:47.574 18:14:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:47.574 18:14:45 -- scripts/common.sh@365 -- # decimal 2 00:05:47.574 18:14:45 -- scripts/common.sh@352 -- # local d=2 00:05:47.574 18:14:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.574 18:14:45 -- scripts/common.sh@354 -- # echo 2 00:05:47.574 18:14:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:47.574 18:14:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:47.574 18:14:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:47.574 18:14:45 -- scripts/common.sh@367 -- # return 0 00:05:47.574 18:14:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.574 18:14:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:47.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.574 --rc genhtml_branch_coverage=1 00:05:47.574 --rc genhtml_function_coverage=1 00:05:47.574 --rc genhtml_legend=1 00:05:47.574 --rc geninfo_all_blocks=1 00:05:47.574 --rc geninfo_unexecuted_blocks=1 00:05:47.574 00:05:47.574 ' 00:05:47.574 18:14:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:47.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.574 --rc genhtml_branch_coverage=1 00:05:47.574 --rc genhtml_function_coverage=1 00:05:47.574 --rc genhtml_legend=1 00:05:47.574 --rc geninfo_all_blocks=1 00:05:47.574 --rc geninfo_unexecuted_blocks=1 00:05:47.574 00:05:47.574 ' 00:05:47.574 18:14:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:47.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.574 --rc genhtml_branch_coverage=1 00:05:47.574 --rc genhtml_function_coverage=1 00:05:47.574 --rc genhtml_legend=1 00:05:47.574 --rc geninfo_all_blocks=1 00:05:47.574 --rc geninfo_unexecuted_blocks=1 00:05:47.574 00:05:47.574 ' 00:05:47.574 18:14:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:47.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.574 --rc genhtml_branch_coverage=1 00:05:47.574 --rc genhtml_function_coverage=1 00:05:47.574 --rc genhtml_legend=1 00:05:47.574 --rc geninfo_all_blocks=1 00:05:47.574 --rc geninfo_unexecuted_blocks=1 00:05:47.574 00:05:47.575 ' 00:05:47.575 18:14:45 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:47.575 18:14:45 -- nvmf/common.sh@7 -- # uname -s 00:05:47.575 18:14:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.575 18:14:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.575 18:14:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.575 18:14:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.575 18:14:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.575 18:14:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.575 18:14:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.575 18:14:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.575 18:14:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.575 18:14:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.575 18:14:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 
00:05:47.575 18:14:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:05:47.575 18:14:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.575 18:14:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.575 18:14:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.575 18:14:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:47.575 18:14:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.575 18:14:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.575 18:14:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.575 18:14:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.575 18:14:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.575 18:14:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.575 18:14:45 -- paths/export.sh@5 -- # export PATH 00:05:47.575 18:14:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.575 18:14:45 -- nvmf/common.sh@46 -- # : 0 00:05:47.575 18:14:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:47.575 18:14:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:47.575 18:14:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:47.575 18:14:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.575 18:14:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.575 18:14:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:47.575 18:14:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:47.575 18:14:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:47.575 18:14:45 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:47.575 18:14:45 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:47.575 18:14:45 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:47.575 18:14:45 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:47.575 18:14:45 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:47.575 18:14:45 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:47.575 18:14:45 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:47.575 18:14:45 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:47.575 18:14:45 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:47.575 18:14:45 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:47.575 18:14:45 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:47.575 18:14:45 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:47.575 18:14:45 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:47.575 18:14:45 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.575 INFO: JSON configuration test init 00:05:47.575 18:14:45 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:47.575 18:14:45 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:47.575 18:14:45 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:47.575 18:14:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.575 18:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.575 18:14:45 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:47.575 18:14:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.575 18:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.575 18:14:45 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:47.575 18:14:45 -- json_config/json_config.sh@98 -- # local app=target 00:05:47.575 18:14:45 -- json_config/json_config.sh@99 -- # shift 00:05:47.575 18:14:45 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:47.575 18:14:45 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:47.575 18:14:45 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:47.575 18:14:45 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:47.575 18:14:45 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:47.575 18:14:45 -- json_config/json_config.sh@111 -- # app_pid[$app]=65817 00:05:47.575 18:14:45 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:47.575 Waiting for target to run... 00:05:47.575 18:14:45 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:47.575 18:14:45 -- json_config/json_config.sh@114 -- # waitforlisten 65817 /var/tmp/spdk_tgt.sock 00:05:47.575 18:14:45 -- common/autotest_common.sh@829 -- # '[' -z 65817 ']' 00:05:47.575 18:14:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.575 18:14:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.575 18:14:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
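For the json_config suite the harness starts a dedicated target with a small memory footprint and a private RPC socket, then waits until that socket answers before sending any configuration RPCs. A rough equivalent of the startup recorded above; the polling loop is an illustrative stand-in for waitforlisten, and rpc_get_methods is used here only as a cheap probe (any RPC that succeeds once the socket is live would do):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # keep retrying until the target is listening on the socket
  done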
00:05:47.575 18:14:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.575 18:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.834 [2024-11-17 18:14:45.841562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:47.834 [2024-11-17 18:14:45.841831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65817 ] 00:05:48.093 [2024-11-17 18:14:46.132659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.093 [2024-11-17 18:14:46.154494] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.093 [2024-11-17 18:14:46.154706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.661 00:05:48.661 18:14:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.661 18:14:46 -- common/autotest_common.sh@862 -- # return 0 00:05:48.661 18:14:46 -- json_config/json_config.sh@115 -- # echo '' 00:05:48.661 18:14:46 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:48.661 18:14:46 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:48.661 18:14:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.661 18:14:46 -- common/autotest_common.sh@10 -- # set +x 00:05:48.920 18:14:46 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:48.920 18:14:46 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:48.920 18:14:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.920 18:14:46 -- common/autotest_common.sh@10 -- # set +x 00:05:48.920 18:14:46 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:48.920 18:14:46 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:48.920 18:14:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:49.179 18:14:47 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:49.179 18:14:47 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:49.179 18:14:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.179 18:14:47 -- common/autotest_common.sh@10 -- # set +x 00:05:49.179 18:14:47 -- json_config/json_config.sh@48 -- # local ret=0 00:05:49.179 18:14:47 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:49.179 18:14:47 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:49.179 18:14:47 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:49.179 18:14:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:49.179 18:14:47 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:49.746 18:14:47 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:49.746 18:14:47 -- json_config/json_config.sh@51 -- # local get_types 00:05:49.746 18:14:47 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:49.746 18:14:47 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:49.746 18:14:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.746 18:14:47 -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.746 18:14:47 -- json_config/json_config.sh@58 -- # return 0 00:05:49.746 18:14:47 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:49.746 18:14:47 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:49.746 18:14:47 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:49.746 18:14:47 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:49.746 18:14:47 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:49.746 18:14:47 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:49.746 18:14:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.746 18:14:47 -- common/autotest_common.sh@10 -- # set +x 00:05:49.746 18:14:47 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:49.746 18:14:47 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:49.746 18:14:47 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:49.746 18:14:47 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:49.746 18:14:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:50.005 MallocForNvmf0 00:05:50.005 18:14:48 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:50.005 18:14:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:50.005 MallocForNvmf1 00:05:50.264 18:14:48 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:50.264 18:14:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:50.264 [2024-11-17 18:14:48.521359] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.523 18:14:48 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.523 18:14:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.782 18:14:48 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:50.782 18:14:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:51.042 18:14:49 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:51.042 18:14:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:51.307 18:14:49 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:51.307 18:14:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:51.307 [2024-11-17 18:14:49.530003] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:51.307 
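create_nvmf_subsystem_config is what produces the NVMe-oF state that the rest of the test round-trips through save_config: two malloc bdevs, a TCP transport, one subsystem with both bdevs attached as namespaces, and a loopback listener on port 4420. Collapsing the tgt_rpc calls from the log into plain rpc.py invocations against the same socket gives roughly:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420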
18:14:49 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:51.307 18:14:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.307 18:14:49 -- common/autotest_common.sh@10 -- # set +x 00:05:51.566 18:14:49 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:51.566 18:14:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.566 18:14:49 -- common/autotest_common.sh@10 -- # set +x 00:05:51.566 18:14:49 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:51.566 18:14:49 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:51.566 18:14:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:51.825 MallocBdevForConfigChangeCheck 00:05:51.825 18:14:49 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:51.825 18:14:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.825 18:14:49 -- common/autotest_common.sh@10 -- # set +x 00:05:51.825 18:14:49 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:51.825 18:14:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.084 INFO: shutting down applications... 00:05:52.084 18:14:50 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:52.084 18:14:50 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:52.084 18:14:50 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:52.084 18:14:50 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:52.084 18:14:50 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:52.344 Calling clear_iscsi_subsystem 00:05:52.344 Calling clear_nvmf_subsystem 00:05:52.344 Calling clear_nbd_subsystem 00:05:52.344 Calling clear_ublk_subsystem 00:05:52.344 Calling clear_vhost_blk_subsystem 00:05:52.344 Calling clear_vhost_scsi_subsystem 00:05:52.344 Calling clear_scheduler_subsystem 00:05:52.344 Calling clear_bdev_subsystem 00:05:52.344 Calling clear_accel_subsystem 00:05:52.344 Calling clear_vmd_subsystem 00:05:52.344 Calling clear_sock_subsystem 00:05:52.344 Calling clear_iobuf_subsystem 00:05:52.603 18:14:50 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:52.603 18:14:50 -- json_config/json_config.sh@396 -- # count=100 00:05:52.603 18:14:50 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:52.603 18:14:50 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.603 18:14:50 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:52.603 18:14:50 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:52.861 18:14:51 -- json_config/json_config.sh@398 -- # break 00:05:52.861 18:14:51 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:52.861 18:14:51 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:52.861 18:14:51 -- json_config/json_config.sh@120 -- # local app=target 00:05:52.861 18:14:51 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:52.862 18:14:51 -- json_config/json_config.sh@124 -- # [[ -n 65817 ]] 00:05:52.862 18:14:51 -- json_config/json_config.sh@127 -- # kill -SIGINT 65817 00:05:52.862 18:14:51 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:52.862 18:14:51 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:52.862 18:14:51 -- json_config/json_config.sh@130 -- # kill -0 65817 00:05:52.862 18:14:51 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:53.429 18:14:51 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:53.429 18:14:51 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:53.429 18:14:51 -- json_config/json_config.sh@130 -- # kill -0 65817 00:05:53.429 SPDK target shutdown done 00:05:53.429 INFO: relaunching applications... 00:05:53.429 18:14:51 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:53.429 18:14:51 -- json_config/json_config.sh@132 -- # break 00:05:53.429 18:14:51 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:53.429 18:14:51 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:53.429 18:14:51 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:53.429 18:14:51 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.429 18:14:51 -- json_config/json_config.sh@98 -- # local app=target 00:05:53.429 18:14:51 -- json_config/json_config.sh@99 -- # shift 00:05:53.429 18:14:51 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:53.429 18:14:51 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:53.429 18:14:51 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:53.429 18:14:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:53.429 18:14:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:53.429 18:14:51 -- json_config/json_config.sh@111 -- # app_pid[$app]=66008 00:05:53.429 18:14:51 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.429 18:14:51 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:53.429 Waiting for target to run... 00:05:53.429 18:14:51 -- json_config/json_config.sh@114 -- # waitforlisten 66008 /var/tmp/spdk_tgt.sock 00:05:53.429 18:14:51 -- common/autotest_common.sh@829 -- # '[' -z 66008 ']' 00:05:53.429 18:14:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.429 18:14:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.429 18:14:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.429 18:14:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.429 18:14:51 -- common/autotest_common.sh@10 -- # set +x 00:05:53.429 [2024-11-17 18:14:51.598002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:53.429 [2024-11-17 18:14:51.598109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66008 ] 00:05:53.688 [2024-11-17 18:14:51.892242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.688 [2024-11-17 18:14:51.914048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.688 [2024-11-17 18:14:51.914205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.945 [2024-11-17 18:14:52.206721] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.203 [2024-11-17 18:14:52.238813] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:54.462 00:05:54.462 INFO: Checking if target configuration is the same... 00:05:54.462 18:14:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.462 18:14:52 -- common/autotest_common.sh@862 -- # return 0 00:05:54.462 18:14:52 -- json_config/json_config.sh@115 -- # echo '' 00:05:54.462 18:14:52 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:54.462 18:14:52 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:54.462 18:14:52 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.462 18:14:52 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:54.462 18:14:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.462 + '[' 2 -ne 2 ']' 00:05:54.462 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:54.462 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:54.462 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:54.462 +++ basename /dev/fd/62 00:05:54.462 ++ mktemp /tmp/62.XXX 00:05:54.462 + tmp_file_1=/tmp/62.kJQ 00:05:54.462 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.462 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.462 + tmp_file_2=/tmp/spdk_tgt_config.json.1bm 00:05:54.462 + ret=0 00:05:54.462 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:55.031 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:55.031 + diff -u /tmp/62.kJQ /tmp/spdk_tgt_config.json.1bm 00:05:55.031 INFO: JSON config files are the same 00:05:55.031 + echo 'INFO: JSON config files are the same' 00:05:55.031 + rm /tmp/62.kJQ /tmp/spdk_tgt_config.json.1bm 00:05:55.031 + exit 0 00:05:55.031 INFO: changing configuration and checking if this can be detected... 00:05:55.031 18:14:53 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:55.031 18:14:53 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:55.031 18:14:53 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.031 18:14:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.291 18:14:53 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.291 18:14:53 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:55.291 18:14:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.291 + '[' 2 -ne 2 ']' 00:05:55.291 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:55.291 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:55.291 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:55.291 +++ basename /dev/fd/62 00:05:55.291 ++ mktemp /tmp/62.XXX 00:05:55.291 + tmp_file_1=/tmp/62.Amu 00:05:55.291 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.291 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.291 + tmp_file_2=/tmp/spdk_tgt_config.json.7HV 00:05:55.291 + ret=0 00:05:55.291 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:55.550 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:55.550 + diff -u /tmp/62.Amu /tmp/spdk_tgt_config.json.7HV 00:05:55.550 + ret=1 00:05:55.550 + echo '=== Start of file: /tmp/62.Amu ===' 00:05:55.550 + cat /tmp/62.Amu 00:05:55.550 + echo '=== End of file: /tmp/62.Amu ===' 00:05:55.550 + echo '' 00:05:55.550 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7HV ===' 00:05:55.550 + cat /tmp/spdk_tgt_config.json.7HV 00:05:55.809 + echo '=== End of file: /tmp/spdk_tgt_config.json.7HV ===' 00:05:55.809 + echo '' 00:05:55.809 + rm /tmp/62.Amu /tmp/spdk_tgt_config.json.7HV 00:05:55.809 + exit 1 00:05:55.809 INFO: configuration change detected. 00:05:55.809 18:14:53 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
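The "JSON config files are the same" and "configuration change detected" verdicts above come from a single mechanism: dump the live configuration with save_config, normalize both it and spdk_tgt_config.json through config_filter.py -method sort, and diff the results. An empty diff (exit 0) means the relaunched target reproduced the saved state; deleting MallocBdevForConfigChangeCheck makes the next diff non-empty (ret=1). A condensed sketch of that comparison, with temp-file names chosen for illustration:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  CFG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
  $RPC save_config | $FILTER -method sort > /tmp/live_sorted.json
  $FILTER -method sort < "$CFG" > /tmp/saved_sorted.json
  diff -u /tmp/saved_sorted.json /tmp/live_sorted.json && echo 'JSON config files are the same'
  # Mutate the running config, then expect the same diff to report a change.
  $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
  $RPC save_config | $FILTER -method sort > /tmp/live_sorted.json
  diff -u /tmp/saved_sorted.json /tmp/live_sorted.json || echo 'configuration change detected'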
00:05:55.809 18:14:53 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:55.809 18:14:53 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:55.809 18:14:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.809 18:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.809 18:14:53 -- json_config/json_config.sh@360 -- # local ret=0 00:05:55.809 18:14:53 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:55.809 18:14:53 -- json_config/json_config.sh@370 -- # [[ -n 66008 ]] 00:05:55.809 18:14:53 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:55.809 18:14:53 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:55.809 18:14:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.809 18:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.809 18:14:53 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:55.809 18:14:53 -- json_config/json_config.sh@246 -- # uname -s 00:05:55.809 18:14:53 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:55.809 18:14:53 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:55.809 18:14:53 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:55.809 18:14:53 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:55.809 18:14:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:55.809 18:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.810 18:14:53 -- json_config/json_config.sh@376 -- # killprocess 66008 00:05:55.810 18:14:53 -- common/autotest_common.sh@936 -- # '[' -z 66008 ']' 00:05:55.810 18:14:53 -- common/autotest_common.sh@940 -- # kill -0 66008 00:05:55.810 18:14:53 -- common/autotest_common.sh@941 -- # uname 00:05:55.810 18:14:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:55.810 18:14:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66008 00:05:55.810 killing process with pid 66008 00:05:55.810 18:14:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:55.810 18:14:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:55.810 18:14:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66008' 00:05:55.810 18:14:53 -- common/autotest_common.sh@955 -- # kill 66008 00:05:55.810 18:14:53 -- common/autotest_common.sh@960 -- # wait 66008 00:05:56.069 18:14:54 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.069 18:14:54 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:56.069 18:14:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:56.069 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:05:56.069 INFO: Success 00:05:56.069 18:14:54 -- json_config/json_config.sh@381 -- # return 0 00:05:56.069 18:14:54 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:56.069 00:05:56.069 real 0m8.541s 00:05:56.069 user 0m12.525s 00:05:56.069 sys 0m1.499s 00:05:56.069 ************************************ 00:05:56.069 END TEST json_config 00:05:56.069 ************************************ 00:05:56.069 18:14:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.069 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:05:56.069 18:14:54 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.069 
18:14:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.069 18:14:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.069 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:05:56.069 ************************************ 00:05:56.069 START TEST json_config_extra_key 00:05:56.069 ************************************ 00:05:56.069 18:14:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.069 18:14:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:56.069 18:14:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:56.069 18:14:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:56.069 18:14:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:56.069 18:14:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:56.069 18:14:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:56.069 18:14:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:56.069 18:14:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:56.069 18:14:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:56.069 18:14:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.069 18:14:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:56.069 18:14:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:56.069 18:14:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:56.069 18:14:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:56.069 18:14:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:56.069 18:14:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:56.069 18:14:54 -- scripts/common.sh@344 -- # : 1 00:05:56.069 18:14:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:56.069 18:14:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.069 18:14:54 -- scripts/common.sh@364 -- # decimal 1 00:05:56.069 18:14:54 -- scripts/common.sh@352 -- # local d=1 00:05:56.069 18:14:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.069 18:14:54 -- scripts/common.sh@354 -- # echo 1 00:05:56.069 18:14:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:56.069 18:14:54 -- scripts/common.sh@365 -- # decimal 2 00:05:56.069 18:14:54 -- scripts/common.sh@352 -- # local d=2 00:05:56.069 18:14:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.069 18:14:54 -- scripts/common.sh@354 -- # echo 2 00:05:56.069 18:14:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:56.069 18:14:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:56.069 18:14:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:56.069 18:14:54 -- scripts/common.sh@367 -- # return 0 00:05:56.069 18:14:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.069 18:14:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:56.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.069 --rc genhtml_branch_coverage=1 00:05:56.069 --rc genhtml_function_coverage=1 00:05:56.069 --rc genhtml_legend=1 00:05:56.069 --rc geninfo_all_blocks=1 00:05:56.069 --rc geninfo_unexecuted_blocks=1 00:05:56.069 00:05:56.069 ' 00:05:56.069 18:14:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:56.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.069 --rc genhtml_branch_coverage=1 00:05:56.069 --rc genhtml_function_coverage=1 00:05:56.069 --rc genhtml_legend=1 00:05:56.069 --rc geninfo_all_blocks=1 00:05:56.069 --rc geninfo_unexecuted_blocks=1 00:05:56.069 00:05:56.069 ' 
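The scripts/common.sh trace just above (lcov --version, awk '{print $NF}', then lt 1.15 2 and cmp_versions) appears to be the gate that only sets the extra lcov coverage flags when the installed lcov is older than 2.x: both version strings are split on '.', '-' and ':' and compared field by field. A self-contained sketch of that comparison, following the array-based approach visible in the trace (function name and the final usage line are illustrative):

    # Return 0 (true) when version $1 is strictly lower than version $2,
    # comparing numeric fields split on '.', '-' and ':'.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"

        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 \
        && echo "old lcov detected, enabling branch/function coverage flags"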
00:05:56.069 18:14:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:56.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.069 --rc genhtml_branch_coverage=1 00:05:56.070 --rc genhtml_function_coverage=1 00:05:56.070 --rc genhtml_legend=1 00:05:56.070 --rc geninfo_all_blocks=1 00:05:56.070 --rc geninfo_unexecuted_blocks=1 00:05:56.070 00:05:56.070 ' 00:05:56.070 18:14:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:56.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.070 --rc genhtml_branch_coverage=1 00:05:56.070 --rc genhtml_function_coverage=1 00:05:56.070 --rc genhtml_legend=1 00:05:56.070 --rc geninfo_all_blocks=1 00:05:56.070 --rc geninfo_unexecuted_blocks=1 00:05:56.070 00:05:56.070 ' 00:05:56.070 18:14:54 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:56.070 18:14:54 -- nvmf/common.sh@7 -- # uname -s 00:05:56.329 18:14:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.329 18:14:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.329 18:14:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.329 18:14:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.329 18:14:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.329 18:14:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.329 18:14:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.329 18:14:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.329 18:14:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.329 18:14:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.329 18:14:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:05:56.329 18:14:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:05:56.329 18:14:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.329 18:14:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.329 18:14:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.329 18:14:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.329 18:14:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.329 18:14:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.329 18:14:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.329 18:14:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.329 18:14:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.329 18:14:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.329 18:14:54 -- paths/export.sh@5 -- # export PATH 00:05:56.329 18:14:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.329 18:14:54 -- nvmf/common.sh@46 -- # : 0 00:05:56.329 18:14:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:56.329 18:14:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:56.329 18:14:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:56.329 18:14:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.329 18:14:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.329 18:14:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:56.329 18:14:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:56.329 18:14:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:56.329 18:14:54 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:56.329 18:14:54 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:56.329 18:14:54 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:56.329 18:14:54 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:56.329 18:14:54 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:56.329 18:14:54 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:56.329 18:14:54 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:56.329 18:14:54 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:56.329 18:14:54 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:56.329 INFO: launching applications... 00:05:56.330 18:14:54 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:56.330 18:14:54 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.330 18:14:54 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:56.330 18:14:54 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:56.330 18:14:54 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:56.330 18:14:54 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:56.330 18:14:54 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66161 00:05:56.330 Waiting for target to run... 
00:05:56.330 18:14:54 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.330 18:14:54 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:56.330 18:14:54 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66161 /var/tmp/spdk_tgt.sock 00:05:56.330 18:14:54 -- common/autotest_common.sh@829 -- # '[' -z 66161 ']' 00:05:56.330 18:14:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.330 18:14:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.330 18:14:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.330 18:14:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.330 18:14:54 -- common/autotest_common.sh@10 -- # set +x 00:05:56.330 [2024-11-17 18:14:54.410449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:56.330 [2024-11-17 18:14:54.410571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66161 ] 00:05:56.589 [2024-11-17 18:14:54.698129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.589 [2024-11-17 18:14:54.721929] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.589 [2024-11-17 18:14:54.722135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.527 18:14:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.527 00:05:57.527 18:14:55 -- common/autotest_common.sh@862 -- # return 0 00:05:57.527 18:14:55 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:57.527 INFO: shutting down applications... 00:05:57.527 18:14:55 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
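The extra_key test above launches a second spdk_tgt with a JSON configuration file and then blocks in waitforlisten until the RPC socket answers. A reduced sketch of that start-and-wait pattern, reusing the binary path, flags and socket from the trace; the readiness probe via rpc_get_methods and the 0.5 s retry spacing are assumptions about one simple way to implement the wait, not the exact mechanism of the traced helper:

    rootdir=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk_tgt.sock

    # Start the target with the extra_key JSON config on a dedicated RPC socket.
    "$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
        --json "$rootdir/test/json_config/extra_key.json" &
    app_pid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for (( i = 0; i < 100; i++ )); do
        # A cheap RPC succeeds only once the app is initialized and listening.
        if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done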
00:05:57.527 18:14:55 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:57.527 18:14:55 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:57.527 18:14:55 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:57.527 18:14:55 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66161 ]] 00:05:57.527 18:14:55 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66161 00:05:57.527 18:14:55 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:57.527 18:14:55 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:57.527 18:14:55 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66161 00:05:57.527 18:14:55 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:57.796 18:14:55 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:57.796 18:14:55 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:57.796 18:14:55 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66161 00:05:57.796 18:14:55 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:57.796 18:14:55 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:57.796 18:14:55 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:57.796 SPDK target shutdown done 00:05:57.796 18:14:55 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:57.796 Success 00:05:57.796 18:14:55 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:57.796 00:05:57.796 real 0m1.774s 00:05:57.796 user 0m1.682s 00:05:57.796 sys 0m0.314s 00:05:57.796 18:14:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.796 ************************************ 00:05:57.796 END TEST json_config_extra_key 00:05:57.796 ************************************ 00:05:57.796 18:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:57.796 18:14:55 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.796 18:14:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.796 18:14:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.796 18:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:57.796 ************************************ 00:05:57.796 START TEST alias_rpc 00:05:57.796 ************************************ 00:05:57.796 18:14:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.055 * Looking for test storage... 
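The shutdown traced above is cooperative: the test sends SIGINT to the target and then polls kill -0 for up to 30 half-second intervals until the PID disappears, only then printing "SPDK target shutdown done". A compact sketch of that loop, continuing from the start sketch above (variable names follow the trace; the hard-kill fallback at the end is an assumption about what a harness might add and is not part of the traced test):

    # Ask spdk_tgt (pid in $app_pid) to shut down cleanly, waiting up to ~15 s.
    kill -SIGINT "$app_pid"

    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2> /dev/null; then
            echo "SPDK target shutdown done"
            app_pid=
            break
        fi
        sleep 0.5
    done

    if [ -n "${app_pid:-}" ]; then
        # Did not exit in time; illustrative fallback only.
        kill -9 "$app_pid" 2> /dev/null || true
    fi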
00:05:58.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:58.055 18:14:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:58.055 18:14:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:58.055 18:14:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:58.055 18:14:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:58.055 18:14:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:58.055 18:14:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:58.055 18:14:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:58.055 18:14:56 -- scripts/common.sh@335 -- # IFS=.-: 00:05:58.055 18:14:56 -- scripts/common.sh@335 -- # read -ra ver1 00:05:58.055 18:14:56 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.055 18:14:56 -- scripts/common.sh@336 -- # read -ra ver2 00:05:58.055 18:14:56 -- scripts/common.sh@337 -- # local 'op=<' 00:05:58.055 18:14:56 -- scripts/common.sh@339 -- # ver1_l=2 00:05:58.055 18:14:56 -- scripts/common.sh@340 -- # ver2_l=1 00:05:58.055 18:14:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:58.055 18:14:56 -- scripts/common.sh@343 -- # case "$op" in 00:05:58.055 18:14:56 -- scripts/common.sh@344 -- # : 1 00:05:58.055 18:14:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:58.055 18:14:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.055 18:14:56 -- scripts/common.sh@364 -- # decimal 1 00:05:58.055 18:14:56 -- scripts/common.sh@352 -- # local d=1 00:05:58.055 18:14:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.055 18:14:56 -- scripts/common.sh@354 -- # echo 1 00:05:58.055 18:14:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:58.055 18:14:56 -- scripts/common.sh@365 -- # decimal 2 00:05:58.055 18:14:56 -- scripts/common.sh@352 -- # local d=2 00:05:58.055 18:14:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.055 18:14:56 -- scripts/common.sh@354 -- # echo 2 00:05:58.055 18:14:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:58.055 18:14:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:58.055 18:14:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:58.055 18:14:56 -- scripts/common.sh@367 -- # return 0 00:05:58.055 18:14:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.055 18:14:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:58.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.055 --rc genhtml_branch_coverage=1 00:05:58.055 --rc genhtml_function_coverage=1 00:05:58.055 --rc genhtml_legend=1 00:05:58.055 --rc geninfo_all_blocks=1 00:05:58.055 --rc geninfo_unexecuted_blocks=1 00:05:58.055 00:05:58.055 ' 00:05:58.055 18:14:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:58.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.055 --rc genhtml_branch_coverage=1 00:05:58.055 --rc genhtml_function_coverage=1 00:05:58.055 --rc genhtml_legend=1 00:05:58.055 --rc geninfo_all_blocks=1 00:05:58.055 --rc geninfo_unexecuted_blocks=1 00:05:58.055 00:05:58.055 ' 00:05:58.055 18:14:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:58.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.055 --rc genhtml_branch_coverage=1 00:05:58.055 --rc genhtml_function_coverage=1 00:05:58.055 --rc genhtml_legend=1 00:05:58.055 --rc geninfo_all_blocks=1 00:05:58.055 --rc geninfo_unexecuted_blocks=1 00:05:58.055 00:05:58.055 ' 
00:05:58.055 18:14:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:58.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.055 --rc genhtml_branch_coverage=1 00:05:58.055 --rc genhtml_function_coverage=1 00:05:58.055 --rc genhtml_legend=1 00:05:58.055 --rc geninfo_all_blocks=1 00:05:58.055 --rc geninfo_unexecuted_blocks=1 00:05:58.055 00:05:58.055 ' 00:05:58.055 18:14:56 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:58.055 18:14:56 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66238 00:05:58.055 18:14:56 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66238 00:05:58.055 18:14:56 -- common/autotest_common.sh@829 -- # '[' -z 66238 ']' 00:05:58.055 18:14:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.055 18:14:56 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.055 18:14:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.055 18:14:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.055 18:14:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.055 18:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:58.055 [2024-11-17 18:14:56.245119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:58.055 [2024-11-17 18:14:56.245240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66238 ] 00:05:58.314 [2024-11-17 18:14:56.382374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.314 [2024-11-17 18:14:56.422351] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.314 [2024-11-17 18:14:56.422546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.251 18:14:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.251 18:14:57 -- common/autotest_common.sh@862 -- # return 0 00:05:59.251 18:14:57 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:59.251 18:14:57 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66238 00:05:59.251 18:14:57 -- common/autotest_common.sh@936 -- # '[' -z 66238 ']' 00:05:59.251 18:14:57 -- common/autotest_common.sh@940 -- # kill -0 66238 00:05:59.251 18:14:57 -- common/autotest_common.sh@941 -- # uname 00:05:59.251 18:14:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.251 18:14:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66238 00:05:59.510 18:14:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.510 killing process with pid 66238 00:05:59.510 18:14:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.510 18:14:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66238' 00:05:59.510 18:14:57 -- common/autotest_common.sh@955 -- # kill 66238 00:05:59.510 18:14:57 -- common/autotest_common.sh@960 -- # wait 66238 00:05:59.510 00:05:59.510 real 0m1.739s 00:05:59.510 user 0m2.081s 00:05:59.510 sys 0m0.350s 00:05:59.510 18:14:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.510 18:14:57 -- common/autotest_common.sh@10 -- # set +x 
00:05:59.510 ************************************ 00:05:59.510 END TEST alias_rpc 00:05:59.510 ************************************ 00:05:59.770 18:14:57 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:59.770 18:14:57 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:59.770 18:14:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.770 18:14:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.770 18:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.770 ************************************ 00:05:59.770 START TEST spdkcli_tcp 00:05:59.770 ************************************ 00:05:59.770 18:14:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:59.770 * Looking for test storage... 00:05:59.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:59.770 18:14:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:59.770 18:14:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:59.770 18:14:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:59.770 18:14:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:59.770 18:14:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:59.770 18:14:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:59.770 18:14:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:59.770 18:14:57 -- scripts/common.sh@335 -- # IFS=.-: 00:05:59.770 18:14:57 -- scripts/common.sh@335 -- # read -ra ver1 00:05:59.770 18:14:57 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.770 18:14:57 -- scripts/common.sh@336 -- # read -ra ver2 00:05:59.770 18:14:57 -- scripts/common.sh@337 -- # local 'op=<' 00:05:59.770 18:14:57 -- scripts/common.sh@339 -- # ver1_l=2 00:05:59.770 18:14:57 -- scripts/common.sh@340 -- # ver2_l=1 00:05:59.770 18:14:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:59.770 18:14:57 -- scripts/common.sh@343 -- # case "$op" in 00:05:59.770 18:14:57 -- scripts/common.sh@344 -- # : 1 00:05:59.770 18:14:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:59.770 18:14:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.770 18:14:57 -- scripts/common.sh@364 -- # decimal 1 00:05:59.770 18:14:57 -- scripts/common.sh@352 -- # local d=1 00:05:59.770 18:14:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.770 18:14:57 -- scripts/common.sh@354 -- # echo 1 00:05:59.770 18:14:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:59.770 18:14:57 -- scripts/common.sh@365 -- # decimal 2 00:05:59.770 18:14:57 -- scripts/common.sh@352 -- # local d=2 00:05:59.770 18:14:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.770 18:14:57 -- scripts/common.sh@354 -- # echo 2 00:05:59.770 18:14:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:59.770 18:14:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:59.770 18:14:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:59.770 18:14:57 -- scripts/common.sh@367 -- # return 0 00:05:59.770 18:14:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.770 18:14:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:59.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.770 --rc genhtml_branch_coverage=1 00:05:59.770 --rc genhtml_function_coverage=1 00:05:59.770 --rc genhtml_legend=1 00:05:59.770 --rc geninfo_all_blocks=1 00:05:59.770 --rc geninfo_unexecuted_blocks=1 00:05:59.770 00:05:59.770 ' 00:05:59.770 18:14:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:59.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.770 --rc genhtml_branch_coverage=1 00:05:59.770 --rc genhtml_function_coverage=1 00:05:59.770 --rc genhtml_legend=1 00:05:59.770 --rc geninfo_all_blocks=1 00:05:59.770 --rc geninfo_unexecuted_blocks=1 00:05:59.770 00:05:59.770 ' 00:05:59.770 18:14:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:59.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.770 --rc genhtml_branch_coverage=1 00:05:59.770 --rc genhtml_function_coverage=1 00:05:59.770 --rc genhtml_legend=1 00:05:59.770 --rc geninfo_all_blocks=1 00:05:59.770 --rc geninfo_unexecuted_blocks=1 00:05:59.770 00:05:59.770 ' 00:05:59.770 18:14:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:59.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.770 --rc genhtml_branch_coverage=1 00:05:59.770 --rc genhtml_function_coverage=1 00:05:59.770 --rc genhtml_legend=1 00:05:59.770 --rc geninfo_all_blocks=1 00:05:59.770 --rc geninfo_unexecuted_blocks=1 00:05:59.770 00:05:59.770 ' 00:05:59.770 18:14:57 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:59.770 18:14:57 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:59.770 18:14:57 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:59.770 18:14:57 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:59.770 18:14:57 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:59.770 18:14:57 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:59.770 18:14:57 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:59.770 18:14:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.770 18:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.770 18:14:57 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66310 00:05:59.770 18:14:57 -- spdkcli/tcp.sh@27 -- # waitforlisten 66310 00:05:59.770 18:14:57 -- spdkcli/tcp.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:59.770 18:14:57 -- common/autotest_common.sh@829 -- # '[' -z 66310 ']' 00:05:59.771 18:14:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.771 18:14:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.771 18:14:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.771 18:14:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.771 18:14:57 -- common/autotest_common.sh@10 -- # set +x 00:06:00.030 [2024-11-17 18:14:58.057119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:00.030 [2024-11-17 18:14:58.057855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66310 ] 00:06:00.030 [2024-11-17 18:14:58.201860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.030 [2024-11-17 18:14:58.242606] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.030 [2024-11-17 18:14:58.242943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.030 [2024-11-17 18:14:58.242956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.966 18:14:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.966 18:14:59 -- common/autotest_common.sh@862 -- # return 0 00:06:00.966 18:14:59 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:00.966 18:14:59 -- spdkcli/tcp.sh@31 -- # socat_pid=66327 00:06:00.966 18:14:59 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:01.225 [ 00:06:01.225 "bdev_malloc_delete", 00:06:01.225 "bdev_malloc_create", 00:06:01.225 "bdev_null_resize", 00:06:01.225 "bdev_null_delete", 00:06:01.225 "bdev_null_create", 00:06:01.225 "bdev_nvme_cuse_unregister", 00:06:01.225 "bdev_nvme_cuse_register", 00:06:01.225 "bdev_opal_new_user", 00:06:01.225 "bdev_opal_set_lock_state", 00:06:01.225 "bdev_opal_delete", 00:06:01.225 "bdev_opal_get_info", 00:06:01.225 "bdev_opal_create", 00:06:01.225 "bdev_nvme_opal_revert", 00:06:01.225 "bdev_nvme_opal_init", 00:06:01.225 "bdev_nvme_send_cmd", 00:06:01.225 "bdev_nvme_get_path_iostat", 00:06:01.225 "bdev_nvme_get_mdns_discovery_info", 00:06:01.225 "bdev_nvme_stop_mdns_discovery", 00:06:01.225 "bdev_nvme_start_mdns_discovery", 00:06:01.225 "bdev_nvme_set_multipath_policy", 00:06:01.225 "bdev_nvme_set_preferred_path", 00:06:01.225 "bdev_nvme_get_io_paths", 00:06:01.225 "bdev_nvme_remove_error_injection", 00:06:01.225 "bdev_nvme_add_error_injection", 00:06:01.225 "bdev_nvme_get_discovery_info", 00:06:01.225 "bdev_nvme_stop_discovery", 00:06:01.225 "bdev_nvme_start_discovery", 00:06:01.225 "bdev_nvme_get_controller_health_info", 00:06:01.225 "bdev_nvme_disable_controller", 00:06:01.225 "bdev_nvme_enable_controller", 00:06:01.225 "bdev_nvme_reset_controller", 00:06:01.225 "bdev_nvme_get_transport_statistics", 00:06:01.225 "bdev_nvme_apply_firmware", 00:06:01.225 "bdev_nvme_detach_controller", 00:06:01.225 "bdev_nvme_get_controllers", 00:06:01.225 "bdev_nvme_attach_controller", 00:06:01.225 
"bdev_nvme_set_hotplug", 00:06:01.225 "bdev_nvme_set_options", 00:06:01.225 "bdev_passthru_delete", 00:06:01.225 "bdev_passthru_create", 00:06:01.225 "bdev_lvol_grow_lvstore", 00:06:01.225 "bdev_lvol_get_lvols", 00:06:01.225 "bdev_lvol_get_lvstores", 00:06:01.225 "bdev_lvol_delete", 00:06:01.225 "bdev_lvol_set_read_only", 00:06:01.225 "bdev_lvol_resize", 00:06:01.225 "bdev_lvol_decouple_parent", 00:06:01.225 "bdev_lvol_inflate", 00:06:01.225 "bdev_lvol_rename", 00:06:01.225 "bdev_lvol_clone_bdev", 00:06:01.225 "bdev_lvol_clone", 00:06:01.225 "bdev_lvol_snapshot", 00:06:01.225 "bdev_lvol_create", 00:06:01.225 "bdev_lvol_delete_lvstore", 00:06:01.225 "bdev_lvol_rename_lvstore", 00:06:01.225 "bdev_lvol_create_lvstore", 00:06:01.225 "bdev_raid_set_options", 00:06:01.225 "bdev_raid_remove_base_bdev", 00:06:01.225 "bdev_raid_add_base_bdev", 00:06:01.225 "bdev_raid_delete", 00:06:01.225 "bdev_raid_create", 00:06:01.225 "bdev_raid_get_bdevs", 00:06:01.225 "bdev_error_inject_error", 00:06:01.225 "bdev_error_delete", 00:06:01.225 "bdev_error_create", 00:06:01.225 "bdev_split_delete", 00:06:01.225 "bdev_split_create", 00:06:01.225 "bdev_delay_delete", 00:06:01.225 "bdev_delay_create", 00:06:01.225 "bdev_delay_update_latency", 00:06:01.225 "bdev_zone_block_delete", 00:06:01.225 "bdev_zone_block_create", 00:06:01.225 "blobfs_create", 00:06:01.225 "blobfs_detect", 00:06:01.225 "blobfs_set_cache_size", 00:06:01.225 "bdev_aio_delete", 00:06:01.225 "bdev_aio_rescan", 00:06:01.225 "bdev_aio_create", 00:06:01.225 "bdev_ftl_set_property", 00:06:01.225 "bdev_ftl_get_properties", 00:06:01.225 "bdev_ftl_get_stats", 00:06:01.225 "bdev_ftl_unmap", 00:06:01.225 "bdev_ftl_unload", 00:06:01.225 "bdev_ftl_delete", 00:06:01.226 "bdev_ftl_load", 00:06:01.226 "bdev_ftl_create", 00:06:01.226 "bdev_virtio_attach_controller", 00:06:01.226 "bdev_virtio_scsi_get_devices", 00:06:01.226 "bdev_virtio_detach_controller", 00:06:01.226 "bdev_virtio_blk_set_hotplug", 00:06:01.226 "bdev_iscsi_delete", 00:06:01.226 "bdev_iscsi_create", 00:06:01.226 "bdev_iscsi_set_options", 00:06:01.226 "bdev_uring_delete", 00:06:01.226 "bdev_uring_create", 00:06:01.226 "accel_error_inject_error", 00:06:01.226 "ioat_scan_accel_module", 00:06:01.226 "dsa_scan_accel_module", 00:06:01.226 "iaa_scan_accel_module", 00:06:01.226 "iscsi_set_options", 00:06:01.226 "iscsi_get_auth_groups", 00:06:01.226 "iscsi_auth_group_remove_secret", 00:06:01.226 "iscsi_auth_group_add_secret", 00:06:01.226 "iscsi_delete_auth_group", 00:06:01.226 "iscsi_create_auth_group", 00:06:01.226 "iscsi_set_discovery_auth", 00:06:01.226 "iscsi_get_options", 00:06:01.226 "iscsi_target_node_request_logout", 00:06:01.226 "iscsi_target_node_set_redirect", 00:06:01.226 "iscsi_target_node_set_auth", 00:06:01.226 "iscsi_target_node_add_lun", 00:06:01.226 "iscsi_get_connections", 00:06:01.226 "iscsi_portal_group_set_auth", 00:06:01.226 "iscsi_start_portal_group", 00:06:01.226 "iscsi_delete_portal_group", 00:06:01.226 "iscsi_create_portal_group", 00:06:01.226 "iscsi_get_portal_groups", 00:06:01.226 "iscsi_delete_target_node", 00:06:01.226 "iscsi_target_node_remove_pg_ig_maps", 00:06:01.226 "iscsi_target_node_add_pg_ig_maps", 00:06:01.226 "iscsi_create_target_node", 00:06:01.226 "iscsi_get_target_nodes", 00:06:01.226 "iscsi_delete_initiator_group", 00:06:01.226 "iscsi_initiator_group_remove_initiators", 00:06:01.226 "iscsi_initiator_group_add_initiators", 00:06:01.226 "iscsi_create_initiator_group", 00:06:01.226 "iscsi_get_initiator_groups", 00:06:01.226 "nvmf_set_crdt", 00:06:01.226 
"nvmf_set_config", 00:06:01.226 "nvmf_set_max_subsystems", 00:06:01.226 "nvmf_subsystem_get_listeners", 00:06:01.226 "nvmf_subsystem_get_qpairs", 00:06:01.226 "nvmf_subsystem_get_controllers", 00:06:01.226 "nvmf_get_stats", 00:06:01.226 "nvmf_get_transports", 00:06:01.226 "nvmf_create_transport", 00:06:01.226 "nvmf_get_targets", 00:06:01.226 "nvmf_delete_target", 00:06:01.226 "nvmf_create_target", 00:06:01.226 "nvmf_subsystem_allow_any_host", 00:06:01.226 "nvmf_subsystem_remove_host", 00:06:01.226 "nvmf_subsystem_add_host", 00:06:01.226 "nvmf_subsystem_remove_ns", 00:06:01.226 "nvmf_subsystem_add_ns", 00:06:01.226 "nvmf_subsystem_listener_set_ana_state", 00:06:01.226 "nvmf_discovery_get_referrals", 00:06:01.226 "nvmf_discovery_remove_referral", 00:06:01.226 "nvmf_discovery_add_referral", 00:06:01.226 "nvmf_subsystem_remove_listener", 00:06:01.226 "nvmf_subsystem_add_listener", 00:06:01.226 "nvmf_delete_subsystem", 00:06:01.226 "nvmf_create_subsystem", 00:06:01.226 "nvmf_get_subsystems", 00:06:01.226 "env_dpdk_get_mem_stats", 00:06:01.226 "nbd_get_disks", 00:06:01.226 "nbd_stop_disk", 00:06:01.226 "nbd_start_disk", 00:06:01.226 "ublk_recover_disk", 00:06:01.226 "ublk_get_disks", 00:06:01.226 "ublk_stop_disk", 00:06:01.226 "ublk_start_disk", 00:06:01.226 "ublk_destroy_target", 00:06:01.226 "ublk_create_target", 00:06:01.226 "virtio_blk_create_transport", 00:06:01.226 "virtio_blk_get_transports", 00:06:01.226 "vhost_controller_set_coalescing", 00:06:01.226 "vhost_get_controllers", 00:06:01.226 "vhost_delete_controller", 00:06:01.226 "vhost_create_blk_controller", 00:06:01.226 "vhost_scsi_controller_remove_target", 00:06:01.226 "vhost_scsi_controller_add_target", 00:06:01.226 "vhost_start_scsi_controller", 00:06:01.226 "vhost_create_scsi_controller", 00:06:01.226 "thread_set_cpumask", 00:06:01.226 "framework_get_scheduler", 00:06:01.226 "framework_set_scheduler", 00:06:01.226 "framework_get_reactors", 00:06:01.226 "thread_get_io_channels", 00:06:01.226 "thread_get_pollers", 00:06:01.226 "thread_get_stats", 00:06:01.226 "framework_monitor_context_switch", 00:06:01.226 "spdk_kill_instance", 00:06:01.226 "log_enable_timestamps", 00:06:01.226 "log_get_flags", 00:06:01.226 "log_clear_flag", 00:06:01.226 "log_set_flag", 00:06:01.226 "log_get_level", 00:06:01.226 "log_set_level", 00:06:01.226 "log_get_print_level", 00:06:01.226 "log_set_print_level", 00:06:01.226 "framework_enable_cpumask_locks", 00:06:01.226 "framework_disable_cpumask_locks", 00:06:01.226 "framework_wait_init", 00:06:01.226 "framework_start_init", 00:06:01.226 "scsi_get_devices", 00:06:01.226 "bdev_get_histogram", 00:06:01.226 "bdev_enable_histogram", 00:06:01.226 "bdev_set_qos_limit", 00:06:01.226 "bdev_set_qd_sampling_period", 00:06:01.226 "bdev_get_bdevs", 00:06:01.226 "bdev_reset_iostat", 00:06:01.226 "bdev_get_iostat", 00:06:01.226 "bdev_examine", 00:06:01.226 "bdev_wait_for_examine", 00:06:01.226 "bdev_set_options", 00:06:01.226 "notify_get_notifications", 00:06:01.226 "notify_get_types", 00:06:01.226 "accel_get_stats", 00:06:01.226 "accel_set_options", 00:06:01.226 "accel_set_driver", 00:06:01.226 "accel_crypto_key_destroy", 00:06:01.226 "accel_crypto_keys_get", 00:06:01.226 "accel_crypto_key_create", 00:06:01.226 "accel_assign_opc", 00:06:01.226 "accel_get_module_info", 00:06:01.226 "accel_get_opc_assignments", 00:06:01.226 "vmd_rescan", 00:06:01.226 "vmd_remove_device", 00:06:01.226 "vmd_enable", 00:06:01.226 "sock_set_default_impl", 00:06:01.226 "sock_impl_set_options", 00:06:01.226 "sock_impl_get_options", 00:06:01.226 
"iobuf_get_stats", 00:06:01.226 "iobuf_set_options", 00:06:01.226 "framework_get_pci_devices", 00:06:01.226 "framework_get_config", 00:06:01.226 "framework_get_subsystems", 00:06:01.226 "trace_get_info", 00:06:01.226 "trace_get_tpoint_group_mask", 00:06:01.226 "trace_disable_tpoint_group", 00:06:01.226 "trace_enable_tpoint_group", 00:06:01.226 "trace_clear_tpoint_mask", 00:06:01.226 "trace_set_tpoint_mask", 00:06:01.226 "spdk_get_version", 00:06:01.226 "rpc_get_methods" 00:06:01.226 ] 00:06:01.226 18:14:59 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:01.226 18:14:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.226 18:14:59 -- common/autotest_common.sh@10 -- # set +x 00:06:01.226 18:14:59 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:01.226 18:14:59 -- spdkcli/tcp.sh@38 -- # killprocess 66310 00:06:01.226 18:14:59 -- common/autotest_common.sh@936 -- # '[' -z 66310 ']' 00:06:01.226 18:14:59 -- common/autotest_common.sh@940 -- # kill -0 66310 00:06:01.226 18:14:59 -- common/autotest_common.sh@941 -- # uname 00:06:01.226 18:14:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.226 18:14:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66310 00:06:01.226 18:14:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.226 18:14:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.226 killing process with pid 66310 00:06:01.226 18:14:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66310' 00:06:01.226 18:14:59 -- common/autotest_common.sh@955 -- # kill 66310 00:06:01.226 18:14:59 -- common/autotest_common.sh@960 -- # wait 66310 00:06:01.485 00:06:01.485 real 0m1.826s 00:06:01.485 user 0m3.524s 00:06:01.485 sys 0m0.418s 00:06:01.485 18:14:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.485 18:14:59 -- common/autotest_common.sh@10 -- # set +x 00:06:01.485 ************************************ 00:06:01.485 END TEST spdkcli_tcp 00:06:01.485 ************************************ 00:06:01.485 18:14:59 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.485 18:14:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.485 18:14:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.485 18:14:59 -- common/autotest_common.sh@10 -- # set +x 00:06:01.485 ************************************ 00:06:01.485 START TEST dpdk_mem_utility 00:06:01.485 ************************************ 00:06:01.485 18:14:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.485 * Looking for test storage... 
00:06:01.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:01.745 18:14:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:01.745 18:14:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:01.745 18:14:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:01.745 18:14:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:01.745 18:14:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:01.745 18:14:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:01.745 18:14:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:01.745 18:14:59 -- scripts/common.sh@335 -- # IFS=.-: 00:06:01.745 18:14:59 -- scripts/common.sh@335 -- # read -ra ver1 00:06:01.745 18:14:59 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.745 18:14:59 -- scripts/common.sh@336 -- # read -ra ver2 00:06:01.745 18:14:59 -- scripts/common.sh@337 -- # local 'op=<' 00:06:01.745 18:14:59 -- scripts/common.sh@339 -- # ver1_l=2 00:06:01.745 18:14:59 -- scripts/common.sh@340 -- # ver2_l=1 00:06:01.745 18:14:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:01.745 18:14:59 -- scripts/common.sh@343 -- # case "$op" in 00:06:01.745 18:14:59 -- scripts/common.sh@344 -- # : 1 00:06:01.745 18:14:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:01.745 18:14:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.745 18:14:59 -- scripts/common.sh@364 -- # decimal 1 00:06:01.745 18:14:59 -- scripts/common.sh@352 -- # local d=1 00:06:01.745 18:14:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.745 18:14:59 -- scripts/common.sh@354 -- # echo 1 00:06:01.745 18:14:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:01.745 18:14:59 -- scripts/common.sh@365 -- # decimal 2 00:06:01.745 18:14:59 -- scripts/common.sh@352 -- # local d=2 00:06:01.745 18:14:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.745 18:14:59 -- scripts/common.sh@354 -- # echo 2 00:06:01.745 18:14:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:01.745 18:14:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:01.745 18:14:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:01.745 18:14:59 -- scripts/common.sh@367 -- # return 0 00:06:01.745 18:14:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.745 18:14:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:01.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.745 --rc genhtml_branch_coverage=1 00:06:01.745 --rc genhtml_function_coverage=1 00:06:01.745 --rc genhtml_legend=1 00:06:01.745 --rc geninfo_all_blocks=1 00:06:01.745 --rc geninfo_unexecuted_blocks=1 00:06:01.745 00:06:01.745 ' 00:06:01.745 18:14:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:01.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.745 --rc genhtml_branch_coverage=1 00:06:01.745 --rc genhtml_function_coverage=1 00:06:01.745 --rc genhtml_legend=1 00:06:01.745 --rc geninfo_all_blocks=1 00:06:01.745 --rc geninfo_unexecuted_blocks=1 00:06:01.745 00:06:01.745 ' 00:06:01.745 18:14:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:01.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.745 --rc genhtml_branch_coverage=1 00:06:01.745 --rc genhtml_function_coverage=1 00:06:01.745 --rc genhtml_legend=1 00:06:01.745 --rc geninfo_all_blocks=1 00:06:01.745 --rc geninfo_unexecuted_blocks=1 00:06:01.745 00:06:01.745 ' 
00:06:01.745 18:14:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:01.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.745 --rc genhtml_branch_coverage=1 00:06:01.745 --rc genhtml_function_coverage=1 00:06:01.745 --rc genhtml_legend=1 00:06:01.745 --rc geninfo_all_blocks=1 00:06:01.745 --rc geninfo_unexecuted_blocks=1 00:06:01.745 00:06:01.745 ' 00:06:01.745 18:14:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:01.745 18:14:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66408 00:06:01.745 18:14:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.745 18:14:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66408 00:06:01.745 18:14:59 -- common/autotest_common.sh@829 -- # '[' -z 66408 ']' 00:06:01.745 18:14:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.745 18:14:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.745 18:14:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.745 18:14:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.745 18:14:59 -- common/autotest_common.sh@10 -- # set +x 00:06:01.745 [2024-11-17 18:14:59.908306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:01.745 [2024-11-17 18:14:59.908411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66408 ] 00:06:02.004 [2024-11-17 18:15:00.042125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.004 [2024-11-17 18:15:00.074324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.004 [2024-11-17 18:15:00.074527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.572 18:15:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.572 18:15:00 -- common/autotest_common.sh@862 -- # return 0 00:06:02.572 18:15:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:02.572 18:15:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:02.572 18:15:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.572 18:15:00 -- common/autotest_common.sh@10 -- # set +x 00:06:02.572 { 00:06:02.572 "filename": "/tmp/spdk_mem_dump.txt" 00:06:02.572 } 00:06:02.572 18:15:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.572 18:15:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:02.832 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:02.832 1 heaps totaling size 814.000000 MiB 00:06:02.832 size: 814.000000 MiB heap id: 0 00:06:02.832 end heaps---------- 00:06:02.832 8 mempools totaling size 598.116089 MiB 00:06:02.832 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:02.832 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:02.832 size: 84.521057 MiB name: bdev_io_66408 00:06:02.832 size: 51.011292 MiB name: evtpool_66408 00:06:02.832 size: 50.003479 MiB name: msgpool_66408 
00:06:02.832 size: 21.763794 MiB name: PDU_Pool 00:06:02.832 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:02.832 size: 0.026123 MiB name: Session_Pool 00:06:02.832 end mempools------- 00:06:02.832 6 memzones totaling size 4.142822 MiB 00:06:02.832 size: 1.000366 MiB name: RG_ring_0_66408 00:06:02.832 size: 1.000366 MiB name: RG_ring_1_66408 00:06:02.832 size: 1.000366 MiB name: RG_ring_4_66408 00:06:02.832 size: 1.000366 MiB name: RG_ring_5_66408 00:06:02.832 size: 0.125366 MiB name: RG_ring_2_66408 00:06:02.832 size: 0.015991 MiB name: RG_ring_3_66408 00:06:02.832 end memzones------- 00:06:02.832 18:15:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.832 heap id: 0 total size: 814.000000 MiB number of busy elements: 308 number of free elements: 15 00:06:02.832 list of free elements. size: 12.470459 MiB 00:06:02.832 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:02.832 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:02.832 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:02.832 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:02.832 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:02.832 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:02.832 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:02.832 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:02.832 element at address: 0x200000200000 with size: 0.832825 MiB 00:06:02.832 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:06:02.832 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:02.832 element at address: 0x200000800000 with size: 0.486328 MiB 00:06:02.832 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:02.832 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:02.832 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:02.833 list of standard malloc elements. 
size: 199.266968 MiB 00:06:02.833 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:02.833 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:02.833 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:02.833 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:02.833 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:02.833 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:02.833 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:02.833 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:02.833 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:02.833 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:06:02.833 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:02.833 element at 
address: 0x200003a5a200 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:02.833 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20001aa91a80 
with size: 0.000183 MiB 00:06:02.833 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:02.833 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa93f40 with size: 0.000183 MiB 
00:06:02.834 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:02.834 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:02.834 element at 
address: 0x200027e6d140 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f600 
with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:02.834 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:02.835 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:02.835 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:02.835 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:02.835 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:02.835 list of memzone associated elements. size: 602.262573 MiB 00:06:02.835 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:02.835 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.835 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:02.835 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.835 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:02.835 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66408_0 00:06:02.835 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:02.835 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66408_0 00:06:02.835 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:02.835 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66408_0 00:06:02.835 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:02.835 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.835 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:02.835 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.835 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:02.835 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66408 00:06:02.835 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:02.835 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66408 00:06:02.835 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:02.835 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66408 00:06:02.835 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:02.835 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.835 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:02.835 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.835 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:02.835 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.835 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:02.835 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.835 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:02.835 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66408 00:06:02.835 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:02.835 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66408 00:06:02.835 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:02.835 associated memzone 
info: size: 1.000366 MiB name: RG_ring_4_66408 00:06:02.835 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:02.835 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66408 00:06:02.835 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:02.835 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66408 00:06:02.835 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:02.835 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.835 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:02.835 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.835 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:02.835 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.835 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:02.835 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66408 00:06:02.835 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:02.835 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.835 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:02.835 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.835 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:02.835 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66408 00:06:02.835 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:02.835 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.835 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:02.835 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66408 00:06:02.835 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:02.835 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66408 00:06:02.835 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:02.835 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.835 18:15:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.835 18:15:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66408 00:06:02.835 18:15:00 -- common/autotest_common.sh@936 -- # '[' -z 66408 ']' 00:06:02.835 18:15:00 -- common/autotest_common.sh@940 -- # kill -0 66408 00:06:02.835 18:15:00 -- common/autotest_common.sh@941 -- # uname 00:06:02.835 18:15:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.835 18:15:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66408 00:06:02.835 18:15:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.835 18:15:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.835 killing process with pid 66408 00:06:02.835 18:15:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66408' 00:06:02.835 18:15:01 -- common/autotest_common.sh@955 -- # kill 66408 00:06:02.835 18:15:01 -- common/autotest_common.sh@960 -- # wait 66408 00:06:03.094 00:06:03.094 real 0m1.550s 00:06:03.094 user 0m1.727s 00:06:03.094 sys 0m0.342s 00:06:03.094 18:15:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.094 18:15:01 -- common/autotest_common.sh@10 -- # set +x 00:06:03.094 ************************************ 00:06:03.094 END TEST dpdk_mem_utility 00:06:03.095 ************************************ 00:06:03.095 18:15:01 -- spdk/autotest.sh@174 -- # run_test event 
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.095 18:15:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.095 18:15:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.095 18:15:01 -- common/autotest_common.sh@10 -- # set +x 00:06:03.095 ************************************ 00:06:03.095 START TEST event 00:06:03.095 ************************************ 00:06:03.095 18:15:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.095 * Looking for test storage... 00:06:03.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:03.095 18:15:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:03.095 18:15:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:03.095 18:15:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:03.353 18:15:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:03.354 18:15:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:03.354 18:15:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:03.354 18:15:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:03.354 18:15:01 -- scripts/common.sh@335 -- # IFS=.-: 00:06:03.354 18:15:01 -- scripts/common.sh@335 -- # read -ra ver1 00:06:03.354 18:15:01 -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.354 18:15:01 -- scripts/common.sh@336 -- # read -ra ver2 00:06:03.354 18:15:01 -- scripts/common.sh@337 -- # local 'op=<' 00:06:03.354 18:15:01 -- scripts/common.sh@339 -- # ver1_l=2 00:06:03.354 18:15:01 -- scripts/common.sh@340 -- # ver2_l=1 00:06:03.354 18:15:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:03.354 18:15:01 -- scripts/common.sh@343 -- # case "$op" in 00:06:03.354 18:15:01 -- scripts/common.sh@344 -- # : 1 00:06:03.354 18:15:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:03.354 18:15:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.354 18:15:01 -- scripts/common.sh@364 -- # decimal 1 00:06:03.354 18:15:01 -- scripts/common.sh@352 -- # local d=1 00:06:03.354 18:15:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.354 18:15:01 -- scripts/common.sh@354 -- # echo 1 00:06:03.354 18:15:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:03.354 18:15:01 -- scripts/common.sh@365 -- # decimal 2 00:06:03.354 18:15:01 -- scripts/common.sh@352 -- # local d=2 00:06:03.354 18:15:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.354 18:15:01 -- scripts/common.sh@354 -- # echo 2 00:06:03.354 18:15:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:03.354 18:15:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:03.354 18:15:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:03.354 18:15:01 -- scripts/common.sh@367 -- # return 0 00:06:03.354 18:15:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.354 18:15:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.354 --rc genhtml_branch_coverage=1 00:06:03.354 --rc genhtml_function_coverage=1 00:06:03.354 --rc genhtml_legend=1 00:06:03.354 --rc geninfo_all_blocks=1 00:06:03.354 --rc geninfo_unexecuted_blocks=1 00:06:03.354 00:06:03.354 ' 00:06:03.354 18:15:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.354 --rc genhtml_branch_coverage=1 00:06:03.354 --rc genhtml_function_coverage=1 00:06:03.354 --rc genhtml_legend=1 00:06:03.354 --rc geninfo_all_blocks=1 00:06:03.354 --rc geninfo_unexecuted_blocks=1 00:06:03.354 00:06:03.354 ' 00:06:03.354 18:15:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.354 --rc genhtml_branch_coverage=1 00:06:03.354 --rc genhtml_function_coverage=1 00:06:03.354 --rc genhtml_legend=1 00:06:03.354 --rc geninfo_all_blocks=1 00:06:03.354 --rc geninfo_unexecuted_blocks=1 00:06:03.354 00:06:03.354 ' 00:06:03.354 18:15:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.354 --rc genhtml_branch_coverage=1 00:06:03.354 --rc genhtml_function_coverage=1 00:06:03.354 --rc genhtml_legend=1 00:06:03.354 --rc geninfo_all_blocks=1 00:06:03.354 --rc geninfo_unexecuted_blocks=1 00:06:03.354 00:06:03.354 ' 00:06:03.354 18:15:01 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:03.354 18:15:01 -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.354 18:15:01 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.354 18:15:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:03.354 18:15:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.354 18:15:01 -- common/autotest_common.sh@10 -- # set +x 00:06:03.354 ************************************ 00:06:03.354 START TEST event_perf 00:06:03.354 ************************************ 00:06:03.354 18:15:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.354 Running I/O for 1 seconds...[2024-11-17 18:15:01.490537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:03.354 [2024-11-17 18:15:01.490770] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66492 ] 00:06:03.612 [2024-11-17 18:15:01.625795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.612 [2024-11-17 18:15:01.665478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.612 [2024-11-17 18:15:01.665641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.612 [2024-11-17 18:15:01.665725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.612 Running I/O for 1 seconds...[2024-11-17 18:15:01.665725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.548 00:06:04.548 lcore 0: 193593 00:06:04.548 lcore 1: 193593 00:06:04.548 lcore 2: 193591 00:06:04.548 lcore 3: 193593 00:06:04.548 done. 00:06:04.548 00:06:04.548 real 0m1.252s 00:06:04.548 user 0m4.075s 00:06:04.548 sys 0m0.052s 00:06:04.548 18:15:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.548 ************************************ 00:06:04.548 END TEST event_perf 00:06:04.548 ************************************ 00:06:04.548 18:15:02 -- common/autotest_common.sh@10 -- # set +x 00:06:04.548 18:15:02 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:04.548 18:15:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:04.548 18:15:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.548 18:15:02 -- common/autotest_common.sh@10 -- # set +x 00:06:04.548 ************************************ 00:06:04.548 START TEST event_reactor 00:06:04.548 ************************************ 00:06:04.548 18:15:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:04.548 [2024-11-17 18:15:02.797876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
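The four "lcore N" counters above are the per-core event totals from the 1-second event_perf run on core mask 0xF. To repeat just that measurement outside the test harness, the traced command line can be invoked directly; this is a minimal sketch assuming the same built SPDK tree at /home/vagrant/spdk_repo/spdk and an environment with hugepages already set up, as the CI job provides:

    # event framework performance test: 4 cores (mask 0xF), 1 second run
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1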
00:06:04.548 [2024-11-17 18:15:02.798182] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66525 ] 00:06:04.807 [2024-11-17 18:15:02.934955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.807 [2024-11-17 18:15:02.970362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.184 test_start 00:06:06.184 oneshot 00:06:06.184 tick 100 00:06:06.184 tick 100 00:06:06.184 tick 250 00:06:06.184 tick 100 00:06:06.184 tick 100 00:06:06.184 tick 100 00:06:06.184 tick 250 00:06:06.184 tick 500 00:06:06.184 tick 100 00:06:06.184 tick 100 00:06:06.184 tick 250 00:06:06.184 tick 100 00:06:06.184 tick 100 00:06:06.184 test_end 00:06:06.184 00:06:06.184 real 0m1.251s 00:06:06.184 user 0m1.106s 00:06:06.184 sys 0m0.037s 00:06:06.184 18:15:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.184 ************************************ 00:06:06.184 END TEST event_reactor 00:06:06.184 ************************************ 00:06:06.184 18:15:04 -- common/autotest_common.sh@10 -- # set +x 00:06:06.184 18:15:04 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.184 18:15:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:06.184 18:15:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.184 18:15:04 -- common/autotest_common.sh@10 -- # set +x 00:06:06.184 ************************************ 00:06:06.184 START TEST event_reactor_perf 00:06:06.184 ************************************ 00:06:06.184 18:15:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.184 [2024-11-17 18:15:04.105074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
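The test_start / oneshot / tick ... / test_end lines above are the reactor test's own trace of its timed events, while the surrounding START TEST / END TEST banners and the real/user/sys timings come from the harness's run_test wrapper. A simplified, hypothetical stand-in for that wrapper, reconstructed only from what is visible in this log (the real helper lives in test/common/autotest_common.sh and does considerably more), might look like:

    # simplified stand-in for the run_test wrapper seen in this log (hypothetical)
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # example: the reactor test as invoked above
    run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1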
00:06:06.184 [2024-11-17 18:15:04.105183] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66555 ] 00:06:06.184 [2024-11-17 18:15:04.237004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.184 [2024-11-17 18:15:04.273946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.120 test_start 00:06:07.120 test_end 00:06:07.120 Performance: 446941 events per second 00:06:07.120 ************************************ 00:06:07.120 END TEST event_reactor_perf 00:06:07.120 ************************************ 00:06:07.120 00:06:07.120 real 0m1.237s 00:06:07.120 user 0m1.088s 00:06:07.120 sys 0m0.042s 00:06:07.120 18:15:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.120 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.120 18:15:05 -- event/event.sh@49 -- # uname -s 00:06:07.120 18:15:05 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.120 18:15:05 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.120 18:15:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.120 18:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.120 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.121 ************************************ 00:06:07.121 START TEST event_scheduler 00:06:07.121 ************************************ 00:06:07.121 18:15:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.380 * Looking for test storage... 00:06:07.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:07.380 18:15:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:07.380 18:15:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:07.380 18:15:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:07.380 18:15:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:07.380 18:15:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:07.380 18:15:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:07.380 18:15:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:07.380 18:15:05 -- scripts/common.sh@335 -- # IFS=.-: 00:06:07.380 18:15:05 -- scripts/common.sh@335 -- # read -ra ver1 00:06:07.380 18:15:05 -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.380 18:15:05 -- scripts/common.sh@336 -- # read -ra ver2 00:06:07.380 18:15:05 -- scripts/common.sh@337 -- # local 'op=<' 00:06:07.380 18:15:05 -- scripts/common.sh@339 -- # ver1_l=2 00:06:07.380 18:15:05 -- scripts/common.sh@340 -- # ver2_l=1 00:06:07.380 18:15:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:07.380 18:15:05 -- scripts/common.sh@343 -- # case "$op" in 00:06:07.380 18:15:05 -- scripts/common.sh@344 -- # : 1 00:06:07.380 18:15:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:07.380 18:15:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.380 18:15:05 -- scripts/common.sh@364 -- # decimal 1 00:06:07.380 18:15:05 -- scripts/common.sh@352 -- # local d=1 00:06:07.380 18:15:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.380 18:15:05 -- scripts/common.sh@354 -- # echo 1 00:06:07.380 18:15:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:07.380 18:15:05 -- scripts/common.sh@365 -- # decimal 2 00:06:07.380 18:15:05 -- scripts/common.sh@352 -- # local d=2 00:06:07.380 18:15:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.380 18:15:05 -- scripts/common.sh@354 -- # echo 2 00:06:07.380 18:15:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:07.380 18:15:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:07.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.380 18:15:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:07.380 18:15:05 -- scripts/common.sh@367 -- # return 0 00:06:07.380 18:15:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.380 18:15:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:07.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.380 --rc genhtml_branch_coverage=1 00:06:07.380 --rc genhtml_function_coverage=1 00:06:07.380 --rc genhtml_legend=1 00:06:07.380 --rc geninfo_all_blocks=1 00:06:07.380 --rc geninfo_unexecuted_blocks=1 00:06:07.380 00:06:07.380 ' 00:06:07.380 18:15:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:07.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.380 --rc genhtml_branch_coverage=1 00:06:07.380 --rc genhtml_function_coverage=1 00:06:07.380 --rc genhtml_legend=1 00:06:07.380 --rc geninfo_all_blocks=1 00:06:07.380 --rc geninfo_unexecuted_blocks=1 00:06:07.380 00:06:07.380 ' 00:06:07.380 18:15:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:07.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.380 --rc genhtml_branch_coverage=1 00:06:07.380 --rc genhtml_function_coverage=1 00:06:07.380 --rc genhtml_legend=1 00:06:07.380 --rc geninfo_all_blocks=1 00:06:07.380 --rc geninfo_unexecuted_blocks=1 00:06:07.380 00:06:07.380 ' 00:06:07.380 18:15:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:07.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.380 --rc genhtml_branch_coverage=1 00:06:07.380 --rc genhtml_function_coverage=1 00:06:07.380 --rc genhtml_legend=1 00:06:07.380 --rc geninfo_all_blocks=1 00:06:07.380 --rc geninfo_unexecuted_blocks=1 00:06:07.380 00:06:07.380 ' 00:06:07.380 18:15:05 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.381 18:15:05 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66629 00:06:07.381 18:15:05 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.381 18:15:05 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.381 18:15:05 -- scheduler/scheduler.sh@37 -- # waitforlisten 66629 00:06:07.381 18:15:05 -- common/autotest_common.sh@829 -- # '[' -z 66629 ']' 00:06:07.381 18:15:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.381 18:15:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.381 18:15:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:07.381 18:15:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.381 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.381 [2024-11-17 18:15:05.612603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:07.381 [2024-11-17 18:15:05.612881] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66629 ] 00:06:07.640 [2024-11-17 18:15:05.745781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.640 [2024-11-17 18:15:05.779210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.640 [2024-11-17 18:15:05.779352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.640 [2024-11-17 18:15:05.779480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.640 [2024-11-17 18:15:05.779483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.640 18:15:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.640 18:15:05 -- common/autotest_common.sh@862 -- # return 0 00:06:07.640 18:15:05 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:07.640 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.640 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.640 POWER: Env isn't set yet! 00:06:07.640 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:07.640 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.640 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.640 POWER: Attempting to initialise PSTAT power management... 00:06:07.640 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.640 POWER: Cannot set governor of lcore 0 to performance 00:06:07.640 POWER: Attempting to initialise CPPC power management... 00:06:07.640 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.640 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.640 POWER: Attempting to initialise VM power management... 
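The POWER messages above are printed while framework_set_scheduler dynamic tries to initialise the available power-management backends (ACPI cpufreq, PSTAT, CPPC, VM); on this VM the cpufreq sysfs nodes are not exposed, so each attempt fails and, as the lines that follow show, the scheduler falls back to its default load/core/busy settings. The same selection can be driven against any SPDK application started with --wait-for-rpc, as the scheduler test app is here; a minimal sketch using the repo's rpc.py against the default socket (framework_get_scheduler is not part of this test and is shown only as an optional check):

    # pick the dynamic scheduler before framework init, then bring the framework up
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    $RPC -s /var/tmp/spdk.sock framework_start_init
    $RPC -s /var/tmp/spdk.sock framework_get_scheduler   # optional: confirm the active scheduler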
00:06:07.640 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:07.640 POWER: Unable to set Power Management Environment for lcore 0 00:06:07.640 [2024-11-17 18:15:05.861677] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:07.640 [2024-11-17 18:15:05.861689] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:07.640 [2024-11-17 18:15:05.861698] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:07.640 [2024-11-17 18:15:05.861709] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:07.640 [2024-11-17 18:15:05.861716] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:07.640 [2024-11-17 18:15:05.861723] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:07.640 18:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.640 18:15:05 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:07.640 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.640 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 [2024-11-17 18:15:05.908475] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:07.900 18:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:05 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:07.900 18:15:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.900 18:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.900 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 ************************************ 00:06:07.900 START TEST scheduler_create_thread 00:06:07.900 ************************************ 00:06:07.900 18:15:05 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:07.900 18:15:05 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:07.900 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 2 00:06:07.900 18:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:05 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:07.900 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 3 00:06:07.900 18:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:05 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:07.900 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 4 00:06:07.900 18:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:05 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:07.900 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 5 00:06:07.900 18:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:05 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:07.900 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 6 00:06:07.900 18:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:05 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:07.900 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 7 00:06:07.900 18:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:05 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:07.900 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 8 00:06:07.900 18:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:05 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:07.900 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 9 00:06:07.900 18:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:05 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:07.900 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 10 00:06:07.900 18:15:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:05 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:07.900 18:15:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 18:15:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:06 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:07.900 18:15:06 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:07.900 18:15:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:06 -- common/autotest_common.sh@10 -- # set +x 00:06:07.900 18:15:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.900 18:15:06 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:07.900 18:15:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.900 18:15:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.468 18:15:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.468 18:15:06 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:08.468 18:15:06 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:08.468 18:15:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.468 18:15:06 -- common/autotest_common.sh@10 -- # set +x 00:06:09.844 ************************************ 00:06:09.844 END TEST scheduler_create_thread 00:06:09.844 ************************************ 00:06:09.844 18:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.844 00:06:09.844 real 0m1.751s 00:06:09.844 user 0m0.013s 00:06:09.844 sys 0m0.011s 00:06:09.844 18:15:07 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.844 18:15:07 -- common/autotest_common.sh@10 -- # set +x 00:06:09.844 18:15:07 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:09.844 18:15:07 -- scheduler/scheduler.sh@46 -- # killprocess 66629 00:06:09.844 18:15:07 -- common/autotest_common.sh@936 -- # '[' -z 66629 ']' 00:06:09.844 18:15:07 -- common/autotest_common.sh@940 -- # kill -0 66629 00:06:09.844 18:15:07 -- common/autotest_common.sh@941 -- # uname 00:06:09.845 18:15:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.845 18:15:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66629 00:06:09.845 killing process with pid 66629 00:06:09.845 18:15:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:09.845 18:15:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:09.845 18:15:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66629' 00:06:09.845 18:15:07 -- common/autotest_common.sh@955 -- # kill 66629 00:06:09.845 18:15:07 -- common/autotest_common.sh@960 -- # wait 66629 00:06:10.103 [2024-11-17 18:15:08.150659] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:10.103 00:06:10.103 real 0m2.912s 00:06:10.103 user 0m3.711s 00:06:10.103 sys 0m0.315s 00:06:10.103 18:15:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.103 ************************************ 00:06:10.103 END TEST event_scheduler 00:06:10.103 18:15:08 -- common/autotest_common.sh@10 -- # set +x 00:06:10.103 ************************************ 00:06:10.103 18:15:08 -- event/event.sh@51 -- # modprobe -n nbd 00:06:10.103 18:15:08 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:10.104 18:15:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.104 18:15:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.104 18:15:08 -- common/autotest_common.sh@10 -- # set +x 00:06:10.104 ************************************ 00:06:10.104 START TEST app_repeat 00:06:10.104 ************************************ 00:06:10.104 18:15:08 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:10.104 18:15:08 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.104 18:15:08 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.104 18:15:08 -- event/event.sh@13 -- # local nbd_list 00:06:10.104 18:15:08 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.104 18:15:08 -- event/event.sh@14 -- # local bdev_list 00:06:10.104 18:15:08 -- event/event.sh@15 -- # local repeat_times=4 00:06:10.104 18:15:08 -- event/event.sh@17 -- # modprobe nbd 00:06:10.104 Process app_repeat pid: 66705 00:06:10.104 spdk_app_start Round 0 00:06:10.104 18:15:08 -- event/event.sh@19 -- # repeat_pid=66705 00:06:10.104 18:15:08 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.104 18:15:08 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:10.104 18:15:08 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 66705' 00:06:10.104 18:15:08 -- event/event.sh@23 -- # for i in {0..2} 00:06:10.104 18:15:08 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:10.104 18:15:08 -- event/event.sh@25 -- # waitforlisten 66705 /var/tmp/spdk-nbd.sock 00:06:10.104 18:15:08 -- common/autotest_common.sh@829 -- # '[' -z 66705 ']' 00:06:10.104 18:15:08 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.104 18:15:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.104 18:15:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.104 18:15:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.104 18:15:08 -- common/autotest_common.sh@10 -- # set +x 00:06:10.369 [2024-11-17 18:15:08.375474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:10.369 [2024-11-17 18:15:08.375811] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66705 ] 00:06:10.369 [2024-11-17 18:15:08.509439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.369 [2024-11-17 18:15:08.543623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.369 [2024-11-17 18:15:08.543631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.728 18:15:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.728 18:15:08 -- common/autotest_common.sh@862 -- # return 0 00:06:10.728 18:15:08 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.728 Malloc0 00:06:10.728 18:15:08 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.294 Malloc1 00:06:11.294 18:15:09 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.294 18:15:09 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@12 -- # local i 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.295 18:15:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.553 /dev/nbd0 00:06:11.553 18:15:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.553 18:15:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.553 18:15:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:11.553 18:15:09 -- common/autotest_common.sh@867 -- # local i 00:06:11.553 18:15:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.553 18:15:09 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.553 18:15:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:11.553 18:15:09 -- common/autotest_common.sh@871 -- # break 00:06:11.553 18:15:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.553 18:15:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.553 18:15:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.553 1+0 records in 00:06:11.553 1+0 records out 00:06:11.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252941 s, 16.2 MB/s 00:06:11.553 18:15:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.553 18:15:09 -- common/autotest_common.sh@884 -- # size=4096 00:06:11.553 18:15:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.553 18:15:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.553 18:15:09 -- common/autotest_common.sh@887 -- # return 0 00:06:11.553 18:15:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.553 18:15:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.553 18:15:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.811 /dev/nbd1 00:06:11.811 18:15:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.811 18:15:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.811 18:15:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:11.811 18:15:09 -- common/autotest_common.sh@867 -- # local i 00:06:11.811 18:15:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.811 18:15:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.811 18:15:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:11.811 18:15:09 -- common/autotest_common.sh@871 -- # break 00:06:11.811 18:15:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.811 18:15:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.811 18:15:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.811 1+0 records in 00:06:11.811 1+0 records out 00:06:11.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308336 s, 13.3 MB/s 00:06:11.812 18:15:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.812 18:15:09 -- common/autotest_common.sh@884 -- # size=4096 00:06:11.812 18:15:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.812 18:15:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.812 18:15:09 -- common/autotest_common.sh@887 -- # return 0 00:06:11.812 18:15:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.812 18:15:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.812 18:15:09 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.812 18:15:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.812 18:15:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.070 { 00:06:12.070 "nbd_device": "/dev/nbd0", 00:06:12.070 "bdev_name": "Malloc0" 00:06:12.070 }, 00:06:12.070 { 00:06:12.070 "nbd_device": "/dev/nbd1", 
00:06:12.070 "bdev_name": "Malloc1" 00:06:12.070 } 00:06:12.070 ]' 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.070 { 00:06:12.070 "nbd_device": "/dev/nbd0", 00:06:12.070 "bdev_name": "Malloc0" 00:06:12.070 }, 00:06:12.070 { 00:06:12.070 "nbd_device": "/dev/nbd1", 00:06:12.070 "bdev_name": "Malloc1" 00:06:12.070 } 00:06:12.070 ]' 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.070 /dev/nbd1' 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.070 /dev/nbd1' 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.070 256+0 records in 00:06:12.070 256+0 records out 00:06:12.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105829 s, 99.1 MB/s 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.070 256+0 records in 00:06:12.070 256+0 records out 00:06:12.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276141 s, 38.0 MB/s 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.070 256+0 records in 00:06:12.070 256+0 records out 00:06:12.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0331995 s, 31.6 MB/s 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.070 18:15:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@51 -- # local i 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@41 -- # break 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.330 18:15:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.899 18:15:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.899 18:15:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.899 18:15:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.899 18:15:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.899 18:15:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.899 18:15:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.899 18:15:10 -- bdev/nbd_common.sh@41 -- # break 00:06:12.899 18:15:10 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.899 18:15:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.899 18:15:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.899 18:15:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@65 -- # true 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.159 18:15:11 -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.159 18:15:11 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.419 18:15:11 -- event/event.sh@35 -- # sleep 3 00:06:13.419 [2024-11-17 18:15:11.599619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.419 [2024-11-17 18:15:11.633842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.419 [2024-11-17 
18:15:11.633851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.419 [2024-11-17 18:15:11.662242] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.419 [2024-11-17 18:15:11.662318] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.710 spdk_app_start Round 1 00:06:16.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.710 18:15:14 -- event/event.sh@23 -- # for i in {0..2} 00:06:16.710 18:15:14 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:16.710 18:15:14 -- event/event.sh@25 -- # waitforlisten 66705 /var/tmp/spdk-nbd.sock 00:06:16.710 18:15:14 -- common/autotest_common.sh@829 -- # '[' -z 66705 ']' 00:06:16.710 18:15:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.710 18:15:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.710 18:15:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.710 18:15:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.710 18:15:14 -- common/autotest_common.sh@10 -- # set +x 00:06:16.710 18:15:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.710 18:15:14 -- common/autotest_common.sh@862 -- # return 0 00:06:16.710 18:15:14 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.969 Malloc0 00:06:16.969 18:15:14 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.969 Malloc1 00:06:16.969 18:15:15 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@12 -- # local i 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.969 18:15:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.228 /dev/nbd0 00:06:17.487 18:15:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.487 18:15:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.487 18:15:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:17.487 18:15:15 -- common/autotest_common.sh@867 -- # local i 00:06:17.487 18:15:15 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:06:17.487 18:15:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.487 18:15:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:17.487 18:15:15 -- common/autotest_common.sh@871 -- # break 00:06:17.487 18:15:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.487 18:15:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.487 18:15:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.487 1+0 records in 00:06:17.487 1+0 records out 00:06:17.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270295 s, 15.2 MB/s 00:06:17.487 18:15:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.487 18:15:15 -- common/autotest_common.sh@884 -- # size=4096 00:06:17.487 18:15:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.487 18:15:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.487 18:15:15 -- common/autotest_common.sh@887 -- # return 0 00:06:17.487 18:15:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.487 18:15:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.487 18:15:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.747 /dev/nbd1 00:06:17.747 18:15:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.747 18:15:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.747 18:15:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:17.747 18:15:15 -- common/autotest_common.sh@867 -- # local i 00:06:17.747 18:15:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.747 18:15:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.747 18:15:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:17.747 18:15:15 -- common/autotest_common.sh@871 -- # break 00:06:17.747 18:15:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.747 18:15:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.747 18:15:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.747 1+0 records in 00:06:17.747 1+0 records out 00:06:17.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320335 s, 12.8 MB/s 00:06:17.747 18:15:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.747 18:15:15 -- common/autotest_common.sh@884 -- # size=4096 00:06:17.747 18:15:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.747 18:15:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.747 18:15:15 -- common/autotest_common.sh@887 -- # return 0 00:06:17.747 18:15:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.747 18:15:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.747 18:15:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.747 18:15:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.747 18:15:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.006 { 00:06:18.006 "nbd_device": "/dev/nbd0", 00:06:18.006 "bdev_name": "Malloc0" 00:06:18.006 }, 00:06:18.006 { 00:06:18.006 
"nbd_device": "/dev/nbd1", 00:06:18.006 "bdev_name": "Malloc1" 00:06:18.006 } 00:06:18.006 ]' 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.006 { 00:06:18.006 "nbd_device": "/dev/nbd0", 00:06:18.006 "bdev_name": "Malloc0" 00:06:18.006 }, 00:06:18.006 { 00:06:18.006 "nbd_device": "/dev/nbd1", 00:06:18.006 "bdev_name": "Malloc1" 00:06:18.006 } 00:06:18.006 ]' 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.006 /dev/nbd1' 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.006 /dev/nbd1' 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.006 18:15:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.007 256+0 records in 00:06:18.007 256+0 records out 00:06:18.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00594851 s, 176 MB/s 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.007 256+0 records in 00:06:18.007 256+0 records out 00:06:18.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251529 s, 41.7 MB/s 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.007 256+0 records in 00:06:18.007 256+0 records out 00:06:18.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288985 s, 36.3 MB/s 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.007 18:15:16 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@51 -- # local i 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.007 18:15:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.266 18:15:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.266 18:15:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.266 18:15:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.266 18:15:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.266 18:15:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.266 18:15:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.266 18:15:16 -- bdev/nbd_common.sh@41 -- # break 00:06:18.266 18:15:16 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.266 18:15:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.266 18:15:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.525 18:15:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.525 18:15:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.525 18:15:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.525 18:15:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.525 18:15:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.525 18:15:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.525 18:15:16 -- bdev/nbd_common.sh@41 -- # break 00:06:18.525 18:15:16 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.525 18:15:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.525 18:15:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.525 18:15:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.785 18:15:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.785 18:15:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.785 18:15:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.043 18:15:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.043 18:15:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.043 18:15:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.043 18:15:17 -- bdev/nbd_common.sh@65 -- # true 00:06:19.043 18:15:17 -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.043 18:15:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.043 18:15:17 -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.043 18:15:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.043 18:15:17 -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.043 18:15:17 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.302 18:15:17 -- event/event.sh@35 -- # sleep 3 00:06:19.302 [2024-11-17 18:15:17.421513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.302 [2024-11-17 18:15:17.453019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
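
The app_repeat rounds traced above all repeat the same attach/write/verify/detach cycle against the nbd devices. Condensed into plain shell it looks roughly like the sketch below; the rpc.py path, socket, RPC method names and the dd/cmp arguments are the ones visible in the trace, while the scratch-file location and the retry delay inside the wait loop are illustrative assumptions rather than the literal helper code from nbd_common.sh.

  # Sketch of one verification round (assumes the app_repeat target is already listening on the socket).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/tmp/nbdrandtest        # illustrative scratch path

  "$rpc" -s "$sock" bdev_malloc_create 64 4096        # 64 MiB bdev with 4 KiB blocks -> Malloc0
  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0  # export the bdev as /dev/nbd0

  for i in $(seq 1 20); do                            # wait for the kernel to publish the device
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1                                       # assumed delay; the real helper simply retries
  done

  dd if=/dev/urandom of="$tmp" bs=4096 count=256      # 1 MiB of random data
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" /dev/nbd0                       # read the device back and verify the contents

  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0           # detach before the next round
  rm -f "$tmp"
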
00:06:19.302 [2024-11-17 18:15:17.453031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.302 [2024-11-17 18:15:17.484725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.302 [2024-11-17 18:15:17.484767] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.589 spdk_app_start Round 2 00:06:22.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.589 18:15:20 -- event/event.sh@23 -- # for i in {0..2} 00:06:22.589 18:15:20 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:22.589 18:15:20 -- event/event.sh@25 -- # waitforlisten 66705 /var/tmp/spdk-nbd.sock 00:06:22.589 18:15:20 -- common/autotest_common.sh@829 -- # '[' -z 66705 ']' 00:06:22.589 18:15:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.589 18:15:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.589 18:15:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.589 18:15:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.589 18:15:20 -- common/autotest_common.sh@10 -- # set +x 00:06:22.589 18:15:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.589 18:15:20 -- common/autotest_common.sh@862 -- # return 0 00:06:22.589 18:15:20 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.589 Malloc0 00:06:22.589 18:15:20 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.849 Malloc1 00:06:22.849 18:15:21 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@12 -- # local i 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.849 18:15:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.108 /dev/nbd0 00:06:23.108 18:15:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.108 18:15:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.108 18:15:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:23.108 18:15:21 -- common/autotest_common.sh@867 -- # local i 00:06:23.108 18:15:21 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:23.108 18:15:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:23.108 18:15:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:23.108 18:15:21 -- common/autotest_common.sh@871 -- # break 00:06:23.108 18:15:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:23.108 18:15:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:23.108 18:15:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.108 1+0 records in 00:06:23.108 1+0 records out 00:06:23.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314512 s, 13.0 MB/s 00:06:23.108 18:15:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.108 18:15:21 -- common/autotest_common.sh@884 -- # size=4096 00:06:23.108 18:15:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.108 18:15:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:23.108 18:15:21 -- common/autotest_common.sh@887 -- # return 0 00:06:23.108 18:15:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.108 18:15:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.108 18:15:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.367 /dev/nbd1 00:06:23.367 18:15:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.627 18:15:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.627 18:15:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:23.627 18:15:21 -- common/autotest_common.sh@867 -- # local i 00:06:23.627 18:15:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:23.627 18:15:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:23.627 18:15:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:23.627 18:15:21 -- common/autotest_common.sh@871 -- # break 00:06:23.627 18:15:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:23.627 18:15:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:23.627 18:15:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.627 1+0 records in 00:06:23.627 1+0 records out 00:06:23.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309892 s, 13.2 MB/s 00:06:23.627 18:15:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.627 18:15:21 -- common/autotest_common.sh@884 -- # size=4096 00:06:23.627 18:15:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.627 18:15:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:23.627 18:15:21 -- common/autotest_common.sh@887 -- # return 0 00:06:23.627 18:15:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.627 18:15:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.627 18:15:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.627 18:15:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.627 18:15:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.627 18:15:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.627 { 00:06:23.627 "nbd_device": "/dev/nbd0", 00:06:23.627 "bdev_name": "Malloc0" 
00:06:23.627 }, 00:06:23.627 { 00:06:23.627 "nbd_device": "/dev/nbd1", 00:06:23.627 "bdev_name": "Malloc1" 00:06:23.627 } 00:06:23.627 ]' 00:06:23.627 18:15:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.627 { 00:06:23.627 "nbd_device": "/dev/nbd0", 00:06:23.627 "bdev_name": "Malloc0" 00:06:23.627 }, 00:06:23.627 { 00:06:23.627 "nbd_device": "/dev/nbd1", 00:06:23.627 "bdev_name": "Malloc1" 00:06:23.627 } 00:06:23.627 ]' 00:06:23.627 18:15:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.887 /dev/nbd1' 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.887 /dev/nbd1' 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.887 256+0 records in 00:06:23.887 256+0 records out 00:06:23.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00646873 s, 162 MB/s 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.887 256+0 records in 00:06:23.887 256+0 records out 00:06:23.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252719 s, 41.5 MB/s 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.887 18:15:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.887 256+0 records in 00:06:23.887 256+0 records out 00:06:23.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275432 s, 38.1 MB/s 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@51 -- # local i 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.887 18:15:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.146 18:15:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.146 18:15:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.146 18:15:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.146 18:15:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.146 18:15:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.146 18:15:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.146 18:15:22 -- bdev/nbd_common.sh@41 -- # break 00:06:24.146 18:15:22 -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.146 18:15:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.146 18:15:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.406 18:15:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.406 18:15:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.406 18:15:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.406 18:15:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.406 18:15:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.406 18:15:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.406 18:15:22 -- bdev/nbd_common.sh@41 -- # break 00:06:24.406 18:15:22 -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.406 18:15:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.406 18:15:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.406 18:15:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.665 18:15:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.665 18:15:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.665 18:15:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.666 18:15:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.666 18:15:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.666 18:15:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.666 18:15:22 -- bdev/nbd_common.sh@65 -- # true 00:06:24.666 18:15:22 -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.666 18:15:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.666 18:15:22 -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.666 18:15:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.666 18:15:22 -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.666 18:15:22 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.925 18:15:23 -- event/event.sh@35 -- # sleep 3 00:06:24.925 [2024-11-17 18:15:23.181118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.184 [2024-11-17 18:15:23.213323] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:25.184 [2024-11-17 18:15:23.213328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.184 [2024-11-17 18:15:23.243206] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.184 [2024-11-17 18:15:23.243300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.471 18:15:26 -- event/event.sh@38 -- # waitforlisten 66705 /var/tmp/spdk-nbd.sock 00:06:28.471 18:15:26 -- common/autotest_common.sh@829 -- # '[' -z 66705 ']' 00:06:28.471 18:15:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.471 18:15:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.471 18:15:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.471 18:15:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.471 18:15:26 -- common/autotest_common.sh@10 -- # set +x 00:06:28.471 18:15:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.471 18:15:26 -- common/autotest_common.sh@862 -- # return 0 00:06:28.471 18:15:26 -- event/event.sh@39 -- # killprocess 66705 00:06:28.471 18:15:26 -- common/autotest_common.sh@936 -- # '[' -z 66705 ']' 00:06:28.471 18:15:26 -- common/autotest_common.sh@940 -- # kill -0 66705 00:06:28.471 18:15:26 -- common/autotest_common.sh@941 -- # uname 00:06:28.471 18:15:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:28.471 18:15:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66705 00:06:28.471 killing process with pid 66705 00:06:28.471 18:15:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.471 18:15:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.471 18:15:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66705' 00:06:28.471 18:15:26 -- common/autotest_common.sh@955 -- # kill 66705 00:06:28.471 18:15:26 -- common/autotest_common.sh@960 -- # wait 66705 00:06:28.471 spdk_app_start is called in Round 0. 00:06:28.471 Shutdown signal received, stop current app iteration 00:06:28.471 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:28.471 spdk_app_start is called in Round 1. 00:06:28.471 Shutdown signal received, stop current app iteration 00:06:28.471 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:28.471 spdk_app_start is called in Round 2. 00:06:28.471 Shutdown signal received, stop current app iteration 00:06:28.471 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:28.471 spdk_app_start is called in Round 3. 
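
Around each round boundary the event/event.sh driver (the event.sh@23-@39 tags in the trace) reruns that verification against a freshly reinitialized app_repeat instance, asking the running app to exit over RPC and pausing before the next iteration. A rough reconstruction of that outer loop follows; the helper bodies (waitforlisten, killprocess, the nbd verification) are left as the function names the trace shows, and APP_PID stands in for the pid the script tracks (66705 in this run).

  # Rough shape of the app_repeat driver loop; helpers come from autotest_common.sh / nbd_common.sh.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$APP_PID" "$sock"                # wait until the relaunched app listens again
      # ... create Malloc0/Malloc1 and run the nbd data verification shown earlier ...
      "$rpc" -s "$sock" spdk_kill_instance SIGTERM    # ask the app to shut down and start the next round
      sleep 3
  done
  waitforlisten "$APP_PID" "$sock"                    # Round 3: the app comes up one last time
  killprocess "$APP_PID"                              # then the test tears it down for good
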
00:06:28.471 Shutdown signal received, stop current app iteration 00:06:28.471 18:15:26 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:28.471 18:15:26 -- event/event.sh@42 -- # return 0 00:06:28.471 00:06:28.471 real 0m18.164s 00:06:28.471 user 0m41.534s 00:06:28.471 sys 0m2.476s 00:06:28.471 18:15:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.471 18:15:26 -- common/autotest_common.sh@10 -- # set +x 00:06:28.471 ************************************ 00:06:28.471 END TEST app_repeat 00:06:28.471 ************************************ 00:06:28.471 18:15:26 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:28.471 18:15:26 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:28.471 18:15:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.471 18:15:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.471 18:15:26 -- common/autotest_common.sh@10 -- # set +x 00:06:28.471 ************************************ 00:06:28.471 START TEST cpu_locks 00:06:28.471 ************************************ 00:06:28.471 18:15:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:28.471 * Looking for test storage... 00:06:28.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:28.471 18:15:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:28.471 18:15:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:28.471 18:15:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:28.471 18:15:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:28.471 18:15:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:28.471 18:15:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:28.471 18:15:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:28.471 18:15:26 -- scripts/common.sh@335 -- # IFS=.-: 00:06:28.471 18:15:26 -- scripts/common.sh@335 -- # read -ra ver1 00:06:28.471 18:15:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.471 18:15:26 -- scripts/common.sh@336 -- # read -ra ver2 00:06:28.471 18:15:26 -- scripts/common.sh@337 -- # local 'op=<' 00:06:28.471 18:15:26 -- scripts/common.sh@339 -- # ver1_l=2 00:06:28.471 18:15:26 -- scripts/common.sh@340 -- # ver2_l=1 00:06:28.471 18:15:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:28.471 18:15:26 -- scripts/common.sh@343 -- # case "$op" in 00:06:28.471 18:15:26 -- scripts/common.sh@344 -- # : 1 00:06:28.471 18:15:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:28.471 18:15:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.471 18:15:26 -- scripts/common.sh@364 -- # decimal 1 00:06:28.471 18:15:26 -- scripts/common.sh@352 -- # local d=1 00:06:28.472 18:15:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.472 18:15:26 -- scripts/common.sh@354 -- # echo 1 00:06:28.472 18:15:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:28.472 18:15:26 -- scripts/common.sh@365 -- # decimal 2 00:06:28.472 18:15:26 -- scripts/common.sh@352 -- # local d=2 00:06:28.472 18:15:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.472 18:15:26 -- scripts/common.sh@354 -- # echo 2 00:06:28.472 18:15:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:28.472 18:15:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:28.472 18:15:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:28.472 18:15:26 -- scripts/common.sh@367 -- # return 0 00:06:28.731 18:15:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.731 18:15:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.731 --rc genhtml_branch_coverage=1 00:06:28.731 --rc genhtml_function_coverage=1 00:06:28.731 --rc genhtml_legend=1 00:06:28.731 --rc geninfo_all_blocks=1 00:06:28.731 --rc geninfo_unexecuted_blocks=1 00:06:28.731 00:06:28.731 ' 00:06:28.731 18:15:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.731 --rc genhtml_branch_coverage=1 00:06:28.731 --rc genhtml_function_coverage=1 00:06:28.731 --rc genhtml_legend=1 00:06:28.731 --rc geninfo_all_blocks=1 00:06:28.731 --rc geninfo_unexecuted_blocks=1 00:06:28.731 00:06:28.731 ' 00:06:28.731 18:15:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.731 --rc genhtml_branch_coverage=1 00:06:28.731 --rc genhtml_function_coverage=1 00:06:28.731 --rc genhtml_legend=1 00:06:28.731 --rc geninfo_all_blocks=1 00:06:28.731 --rc geninfo_unexecuted_blocks=1 00:06:28.731 00:06:28.731 ' 00:06:28.731 18:15:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.731 --rc genhtml_branch_coverage=1 00:06:28.731 --rc genhtml_function_coverage=1 00:06:28.731 --rc genhtml_legend=1 00:06:28.731 --rc geninfo_all_blocks=1 00:06:28.731 --rc geninfo_unexecuted_blocks=1 00:06:28.731 00:06:28.731 ' 00:06:28.731 18:15:26 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:28.731 18:15:26 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:28.731 18:15:26 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:28.731 18:15:26 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:28.731 18:15:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.731 18:15:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.731 18:15:26 -- common/autotest_common.sh@10 -- # set +x 00:06:28.731 ************************************ 00:06:28.731 START TEST default_locks 00:06:28.731 ************************************ 00:06:28.731 18:15:26 -- common/autotest_common.sh@1114 -- # default_locks 00:06:28.731 18:15:26 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67137 00:06:28.731 18:15:26 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.731 18:15:26 -- event/cpu_locks.sh@47 -- # waitforlisten 
67137 00:06:28.731 18:15:26 -- common/autotest_common.sh@829 -- # '[' -z 67137 ']' 00:06:28.731 18:15:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.731 18:15:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.731 18:15:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.731 18:15:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.731 18:15:26 -- common/autotest_common.sh@10 -- # set +x 00:06:28.731 [2024-11-17 18:15:26.802456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:28.732 [2024-11-17 18:15:26.802567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67137 ] 00:06:28.732 [2024-11-17 18:15:26.934496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.732 [2024-11-17 18:15:26.967390] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.732 [2024-11-17 18:15:26.967572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.669 18:15:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.669 18:15:27 -- common/autotest_common.sh@862 -- # return 0 00:06:29.669 18:15:27 -- event/cpu_locks.sh@49 -- # locks_exist 67137 00:06:29.669 18:15:27 -- event/cpu_locks.sh@22 -- # lslocks -p 67137 00:06:29.669 18:15:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.237 18:15:28 -- event/cpu_locks.sh@50 -- # killprocess 67137 00:06:30.237 18:15:28 -- common/autotest_common.sh@936 -- # '[' -z 67137 ']' 00:06:30.237 18:15:28 -- common/autotest_common.sh@940 -- # kill -0 67137 00:06:30.237 18:15:28 -- common/autotest_common.sh@941 -- # uname 00:06:30.237 18:15:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.237 18:15:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67137 00:06:30.237 18:15:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:30.237 18:15:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:30.237 killing process with pid 67137 00:06:30.237 18:15:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67137' 00:06:30.237 18:15:28 -- common/autotest_common.sh@955 -- # kill 67137 00:06:30.237 18:15:28 -- common/autotest_common.sh@960 -- # wait 67137 00:06:30.496 18:15:28 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67137 00:06:30.496 18:15:28 -- common/autotest_common.sh@650 -- # local es=0 00:06:30.496 18:15:28 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67137 00:06:30.496 18:15:28 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:30.496 18:15:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.496 18:15:28 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:30.496 18:15:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.496 18:15:28 -- common/autotest_common.sh@653 -- # waitforlisten 67137 00:06:30.496 18:15:28 -- common/autotest_common.sh@829 -- # '[' -z 67137 ']' 00:06:30.496 18:15:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.496 18:15:28 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.496 18:15:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.496 18:15:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.496 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:06:30.496 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67137) - No such process 00:06:30.496 ERROR: process (pid: 67137) is no longer running 00:06:30.496 18:15:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.496 18:15:28 -- common/autotest_common.sh@862 -- # return 1 00:06:30.496 18:15:28 -- common/autotest_common.sh@653 -- # es=1 00:06:30.496 18:15:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.496 18:15:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.496 18:15:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.496 18:15:28 -- event/cpu_locks.sh@54 -- # no_locks 00:06:30.496 18:15:28 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.496 18:15:28 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.496 18:15:28 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.496 00:06:30.496 real 0m1.789s 00:06:30.496 user 0m2.042s 00:06:30.496 sys 0m0.458s 00:06:30.496 18:15:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.496 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:06:30.496 ************************************ 00:06:30.496 END TEST default_locks 00:06:30.496 ************************************ 00:06:30.496 18:15:28 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:30.496 18:15:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.496 18:15:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.496 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:06:30.496 ************************************ 00:06:30.496 START TEST default_locks_via_rpc 00:06:30.496 ************************************ 00:06:30.496 18:15:28 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:30.496 18:15:28 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67183 00:06:30.496 18:15:28 -- event/cpu_locks.sh@63 -- # waitforlisten 67183 00:06:30.496 18:15:28 -- common/autotest_common.sh@829 -- # '[' -z 67183 ']' 00:06:30.496 18:15:28 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.496 18:15:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.496 18:15:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.496 18:15:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.496 18:15:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.496 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:06:30.496 [2024-11-17 18:15:28.646759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
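
The default_locks run that finishes just above reduces to two assertions: while spdk_tgt is alive and pinned to core 0 it must hold a file lock whose name contains spdk_cpu_lock (exactly what the lslocks | grep pair in the trace checks), and once the process has been killed, waitforlisten against the same pid must fail. A compressed sketch of that flow, using the helper names from autotest_common.sh as they appear in the trace; the pid handling here is illustrative, the real script carries it in spdk_tgt_pid.

  # Sketch of the default_locks assertions (waitforlisten/killprocess are the autotest_common.sh helpers).
  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 &                        # single-core mask, so one CPU core lock is expected
  pid=$!
  waitforlisten "$pid"                        # blocks until the default /var/tmp/spdk.sock is up

  lslocks -p "$pid" | grep -q spdk_cpu_lock   # passes while the core lock is held

  killprocess "$pid"                          # kill + wait, as traced above
  if ! waitforlisten "$pid"; then
      echo "process $pid is gone, as expected"    # the trace records this branch as return 1 / es=1
  fi
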
00:06:30.496 [2024-11-17 18:15:28.646874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67183 ] 00:06:30.756 [2024-11-17 18:15:28.782373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.756 [2024-11-17 18:15:28.817062] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.756 [2024-11-17 18:15:28.817267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.693 18:15:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.693 18:15:29 -- common/autotest_common.sh@862 -- # return 0 00:06:31.693 18:15:29 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:31.693 18:15:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.693 18:15:29 -- common/autotest_common.sh@10 -- # set +x 00:06:31.693 18:15:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.693 18:15:29 -- event/cpu_locks.sh@67 -- # no_locks 00:06:31.693 18:15:29 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:31.693 18:15:29 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:31.693 18:15:29 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:31.693 18:15:29 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.693 18:15:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.693 18:15:29 -- common/autotest_common.sh@10 -- # set +x 00:06:31.693 18:15:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.693 18:15:29 -- event/cpu_locks.sh@71 -- # locks_exist 67183 00:06:31.693 18:15:29 -- event/cpu_locks.sh@22 -- # lslocks -p 67183 00:06:31.693 18:15:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.952 18:15:30 -- event/cpu_locks.sh@73 -- # killprocess 67183 00:06:31.952 18:15:30 -- common/autotest_common.sh@936 -- # '[' -z 67183 ']' 00:06:31.952 18:15:30 -- common/autotest_common.sh@940 -- # kill -0 67183 00:06:31.953 18:15:30 -- common/autotest_common.sh@941 -- # uname 00:06:31.953 18:15:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:31.953 18:15:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67183 00:06:31.953 18:15:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:31.953 18:15:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:31.953 killing process with pid 67183 00:06:31.953 18:15:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67183' 00:06:31.953 18:15:30 -- common/autotest_common.sh@955 -- # kill 67183 00:06:31.953 18:15:30 -- common/autotest_common.sh@960 -- # wait 67183 00:06:32.211 00:06:32.211 real 0m1.775s 00:06:32.211 user 0m2.062s 00:06:32.211 sys 0m0.447s 00:06:32.211 18:15:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.211 18:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:32.211 ************************************ 00:06:32.211 END TEST default_locks_via_rpc 00:06:32.211 ************************************ 00:06:32.211 18:15:30 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:32.211 18:15:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.211 18:15:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.211 18:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:32.211 
************************************ 00:06:32.211 START TEST non_locking_app_on_locked_coremask 00:06:32.211 ************************************ 00:06:32.211 18:15:30 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:32.211 18:15:30 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67234 00:06:32.211 18:15:30 -- event/cpu_locks.sh@81 -- # waitforlisten 67234 /var/tmp/spdk.sock 00:06:32.211 18:15:30 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.211 18:15:30 -- common/autotest_common.sh@829 -- # '[' -z 67234 ']' 00:06:32.211 18:15:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.211 18:15:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.211 18:15:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.211 18:15:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.211 18:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:32.211 [2024-11-17 18:15:30.461337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:32.211 [2024-11-17 18:15:30.461444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67234 ] 00:06:32.471 [2024-11-17 18:15:30.589536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.471 [2024-11-17 18:15:30.621258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.471 [2024-11-17 18:15:30.621458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.407 18:15:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.407 18:15:31 -- common/autotest_common.sh@862 -- # return 0 00:06:33.407 18:15:31 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67249 00:06:33.407 18:15:31 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:33.407 18:15:31 -- event/cpu_locks.sh@85 -- # waitforlisten 67249 /var/tmp/spdk2.sock 00:06:33.407 18:15:31 -- common/autotest_common.sh@829 -- # '[' -z 67249 ']' 00:06:33.407 18:15:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.407 18:15:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.407 18:15:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.407 18:15:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.407 18:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:33.407 [2024-11-17 18:15:31.430124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:33.407 [2024-11-17 18:15:31.430199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67249 ] 00:06:33.407 [2024-11-17 18:15:31.561442] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.407 [2024-11-17 18:15:31.561498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.407 [2024-11-17 18:15:31.628648] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.407 [2024-11-17 18:15:31.628817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.344 18:15:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.344 18:15:32 -- common/autotest_common.sh@862 -- # return 0 00:06:34.344 18:15:32 -- event/cpu_locks.sh@87 -- # locks_exist 67234 00:06:34.344 18:15:32 -- event/cpu_locks.sh@22 -- # lslocks -p 67234 00:06:34.344 18:15:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.913 18:15:33 -- event/cpu_locks.sh@89 -- # killprocess 67234 00:06:34.913 18:15:33 -- common/autotest_common.sh@936 -- # '[' -z 67234 ']' 00:06:34.913 18:15:33 -- common/autotest_common.sh@940 -- # kill -0 67234 00:06:34.913 18:15:33 -- common/autotest_common.sh@941 -- # uname 00:06:34.913 18:15:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:34.913 18:15:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67234 00:06:34.913 18:15:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:34.913 18:15:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:34.913 killing process with pid 67234 00:06:34.913 18:15:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67234' 00:06:34.913 18:15:33 -- common/autotest_common.sh@955 -- # kill 67234 00:06:34.913 18:15:33 -- common/autotest_common.sh@960 -- # wait 67234 00:06:35.482 18:15:33 -- event/cpu_locks.sh@90 -- # killprocess 67249 00:06:35.482 18:15:33 -- common/autotest_common.sh@936 -- # '[' -z 67249 ']' 00:06:35.482 18:15:33 -- common/autotest_common.sh@940 -- # kill -0 67249 00:06:35.482 18:15:33 -- common/autotest_common.sh@941 -- # uname 00:06:35.482 18:15:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:35.482 18:15:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67249 00:06:35.482 18:15:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:35.482 18:15:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:35.482 killing process with pid 67249 00:06:35.482 18:15:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67249' 00:06:35.482 18:15:33 -- common/autotest_common.sh@955 -- # kill 67249 00:06:35.482 18:15:33 -- common/autotest_common.sh@960 -- # wait 67249 00:06:35.742 00:06:35.742 real 0m3.418s 00:06:35.742 user 0m3.962s 00:06:35.742 sys 0m0.859s 00:06:35.742 18:15:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.742 18:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:35.742 ************************************ 00:06:35.742 END TEST non_locking_app_on_locked_coremask 00:06:35.742 ************************************ 00:06:35.742 18:15:33 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:35.742 18:15:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.742 18:15:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.742 18:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:35.742 ************************************ 00:06:35.742 START TEST locking_app_on_unlocked_coremask 00:06:35.742 ************************************ 00:06:35.742 18:15:33 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:35.742 18:15:33 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67312 00:06:35.742 18:15:33 -- event/cpu_locks.sh@99 -- # waitforlisten 67312 /var/tmp/spdk.sock 00:06:35.742 18:15:33 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:35.742 18:15:33 -- common/autotest_common.sh@829 -- # '[' -z 67312 ']' 00:06:35.742 18:15:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.742 18:15:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.742 18:15:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.742 18:15:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.742 18:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:35.742 [2024-11-17 18:15:33.942964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:35.742 [2024-11-17 18:15:33.943075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67312 ] 00:06:36.002 [2024-11-17 18:15:34.080867] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:36.002 [2024-11-17 18:15:34.080922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.002 [2024-11-17 18:15:34.112089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.002 [2024-11-17 18:15:34.112270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.940 18:15:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.940 18:15:34 -- common/autotest_common.sh@862 -- # return 0 00:06:36.940 18:15:34 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67328 00:06:36.940 18:15:34 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.940 18:15:34 -- event/cpu_locks.sh@103 -- # waitforlisten 67328 /var/tmp/spdk2.sock 00:06:36.940 18:15:34 -- common/autotest_common.sh@829 -- # '[' -z 67328 ']' 00:06:36.940 18:15:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.940 18:15:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.940 18:15:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.940 18:15:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.940 18:15:34 -- common/autotest_common.sh@10 -- # set +x 00:06:36.940 [2024-11-17 18:15:34.989753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:36.940 [2024-11-17 18:15:34.989862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67328 ] 00:06:36.940 [2024-11-17 18:15:35.131593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.940 [2024-11-17 18:15:35.194833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.940 [2024-11-17 18:15:35.194995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.878 18:15:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.878 18:15:35 -- common/autotest_common.sh@862 -- # return 0 00:06:37.878 18:15:35 -- event/cpu_locks.sh@105 -- # locks_exist 67328 00:06:37.878 18:15:35 -- event/cpu_locks.sh@22 -- # lslocks -p 67328 00:06:37.878 18:15:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.817 18:15:36 -- event/cpu_locks.sh@107 -- # killprocess 67312 00:06:38.817 18:15:36 -- common/autotest_common.sh@936 -- # '[' -z 67312 ']' 00:06:38.817 18:15:36 -- common/autotest_common.sh@940 -- # kill -0 67312 00:06:38.817 18:15:36 -- common/autotest_common.sh@941 -- # uname 00:06:38.817 18:15:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:38.817 18:15:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67312 00:06:38.817 18:15:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:38.817 18:15:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:38.817 killing process with pid 67312 00:06:38.817 18:15:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67312' 00:06:38.817 18:15:36 -- common/autotest_common.sh@955 -- # kill 67312 00:06:38.817 18:15:36 -- common/autotest_common.sh@960 -- # wait 67312 00:06:39.076 18:15:37 -- event/cpu_locks.sh@108 -- # killprocess 67328 00:06:39.076 18:15:37 -- common/autotest_common.sh@936 -- # '[' -z 67328 ']' 00:06:39.076 18:15:37 -- common/autotest_common.sh@940 -- # kill -0 67328 00:06:39.076 18:15:37 -- common/autotest_common.sh@941 -- # uname 00:06:39.076 18:15:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.076 18:15:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67328 00:06:39.076 18:15:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.076 18:15:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.076 killing process with pid 67328 00:06:39.076 18:15:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67328' 00:06:39.076 18:15:37 -- common/autotest_common.sh@955 -- # kill 67328 00:06:39.076 18:15:37 -- common/autotest_common.sh@960 -- # wait 67328 00:06:39.335 00:06:39.335 real 0m3.608s 00:06:39.336 user 0m4.297s 00:06:39.336 sys 0m0.882s 00:06:39.336 18:15:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.336 18:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:39.336 ************************************ 00:06:39.336 END TEST locking_app_on_unlocked_coremask 00:06:39.336 ************************************ 00:06:39.336 18:15:37 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:39.336 18:15:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.336 18:15:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.336 18:15:37 -- common/autotest_common.sh@10 -- # set +x 
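The locking_app_on_unlocked_coremask run that just finished reduces to starting two targets on the same core mask, with only the first one opting out of core locks so that the second can still claim them. A condensed sketch of that sequence (binary path and sockets as in the trace above, wait and RPC plumbing omitted) would be:

  # First target runs on core 0 but does not take the core lock ...
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  # ... so a second target on the same mask can still lock core 0,
  # provided it listens on its own RPC socket.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &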
00:06:39.336 ************************************ 00:06:39.336 START TEST locking_app_on_locked_coremask 00:06:39.336 ************************************ 00:06:39.336 18:15:37 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:39.336 18:15:37 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67384 00:06:39.336 18:15:37 -- event/cpu_locks.sh@116 -- # waitforlisten 67384 /var/tmp/spdk.sock 00:06:39.336 18:15:37 -- common/autotest_common.sh@829 -- # '[' -z 67384 ']' 00:06:39.336 18:15:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.336 18:15:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.336 18:15:37 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.336 18:15:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.336 18:15:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.336 18:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:39.336 [2024-11-17 18:15:37.601915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:39.336 [2024-11-17 18:15:37.602039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67384 ] 00:06:39.595 [2024-11-17 18:15:37.740336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.595 [2024-11-17 18:15:37.771565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.595 [2024-11-17 18:15:37.771773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.532 18:15:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.532 18:15:38 -- common/autotest_common.sh@862 -- # return 0 00:06:40.532 18:15:38 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67400 00:06:40.532 18:15:38 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67400 /var/tmp/spdk2.sock 00:06:40.532 18:15:38 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.532 18:15:38 -- common/autotest_common.sh@650 -- # local es=0 00:06:40.532 18:15:38 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67400 /var/tmp/spdk2.sock 00:06:40.532 18:15:38 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:40.532 18:15:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.532 18:15:38 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:40.532 18:15:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.532 18:15:38 -- common/autotest_common.sh@653 -- # waitforlisten 67400 /var/tmp/spdk2.sock 00:06:40.532 18:15:38 -- common/autotest_common.sh@829 -- # '[' -z 67400 ']' 00:06:40.532 18:15:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.532 18:15:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.532 18:15:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:40.532 18:15:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.532 18:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:40.532 [2024-11-17 18:15:38.613033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:40.533 [2024-11-17 18:15:38.613160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67400 ] 00:06:40.533 [2024-11-17 18:15:38.748423] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67384 has claimed it. 00:06:40.533 [2024-11-17 18:15:38.748507] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.116 ERROR: process (pid: 67400) is no longer running 00:06:41.116 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67400) - No such process 00:06:41.116 18:15:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.116 18:15:39 -- common/autotest_common.sh@862 -- # return 1 00:06:41.116 18:15:39 -- common/autotest_common.sh@653 -- # es=1 00:06:41.116 18:15:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.116 18:15:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.116 18:15:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.116 18:15:39 -- event/cpu_locks.sh@122 -- # locks_exist 67384 00:06:41.116 18:15:39 -- event/cpu_locks.sh@22 -- # lslocks -p 67384 00:06:41.116 18:15:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.377 18:15:39 -- event/cpu_locks.sh@124 -- # killprocess 67384 00:06:41.377 18:15:39 -- common/autotest_common.sh@936 -- # '[' -z 67384 ']' 00:06:41.377 18:15:39 -- common/autotest_common.sh@940 -- # kill -0 67384 00:06:41.377 18:15:39 -- common/autotest_common.sh@941 -- # uname 00:06:41.377 18:15:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.377 18:15:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67384 00:06:41.377 18:15:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.377 18:15:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.377 killing process with pid 67384 00:06:41.377 18:15:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67384' 00:06:41.377 18:15:39 -- common/autotest_common.sh@955 -- # kill 67384 00:06:41.377 18:15:39 -- common/autotest_common.sh@960 -- # wait 67384 00:06:41.636 00:06:41.636 real 0m2.313s 00:06:41.636 user 0m2.817s 00:06:41.636 sys 0m0.440s 00:06:41.636 18:15:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.636 18:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:41.636 ************************************ 00:06:41.636 END TEST locking_app_on_locked_coremask 00:06:41.636 ************************************ 00:06:41.636 18:15:39 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:41.636 18:15:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.636 18:15:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.636 18:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:41.636 ************************************ 00:06:41.636 START TEST locking_overlapped_coremask 00:06:41.636 ************************************ 00:06:41.636 18:15:39 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:41.895 18:15:39 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67445 00:06:41.895 18:15:39 -- event/cpu_locks.sh@133 -- # waitforlisten 67445 /var/tmp/spdk.sock 00:06:41.895 18:15:39 -- common/autotest_common.sh@829 -- # '[' -z 67445 ']' 00:06:41.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.895 18:15:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.895 18:15:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.895 18:15:39 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:41.895 18:15:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.895 18:15:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.895 18:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:41.895 [2024-11-17 18:15:39.962019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:41.895 [2024-11-17 18:15:39.962120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67445 ] 00:06:41.895 [2024-11-17 18:15:40.100994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.895 [2024-11-17 18:15:40.141154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.895 [2024-11-17 18:15:40.141502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.895 [2024-11-17 18:15:40.141689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.895 [2024-11-17 18:15:40.141695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.852 18:15:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.852 18:15:40 -- common/autotest_common.sh@862 -- # return 0 00:06:42.852 18:15:40 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:42.852 18:15:40 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67463 00:06:42.852 18:15:40 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67463 /var/tmp/spdk2.sock 00:06:42.852 18:15:40 -- common/autotest_common.sh@650 -- # local es=0 00:06:42.852 18:15:40 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67463 /var/tmp/spdk2.sock 00:06:42.852 18:15:40 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:42.852 18:15:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.852 18:15:40 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:42.852 18:15:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.852 18:15:40 -- common/autotest_common.sh@653 -- # waitforlisten 67463 /var/tmp/spdk2.sock 00:06:42.852 18:15:40 -- common/autotest_common.sh@829 -- # '[' -z 67463 ']' 00:06:42.852 18:15:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.852 18:15:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.852 18:15:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:42.852 18:15:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.852 18:15:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.852 [2024-11-17 18:15:40.892506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:42.852 [2024-11-17 18:15:40.892615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67463 ] 00:06:42.852 [2024-11-17 18:15:41.031307] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67445 has claimed it. 00:06:42.852 [2024-11-17 18:15:41.034447] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.485 ERROR: process (pid: 67463) is no longer running 00:06:43.485 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67463) - No such process 00:06:43.486 18:15:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.486 18:15:41 -- common/autotest_common.sh@862 -- # return 1 00:06:43.486 18:15:41 -- common/autotest_common.sh@653 -- # es=1 00:06:43.486 18:15:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.486 18:15:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.486 18:15:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.486 18:15:41 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:43.486 18:15:41 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.486 18:15:41 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.486 18:15:41 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.486 18:15:41 -- event/cpu_locks.sh@141 -- # killprocess 67445 00:06:43.486 18:15:41 -- common/autotest_common.sh@936 -- # '[' -z 67445 ']' 00:06:43.486 18:15:41 -- common/autotest_common.sh@940 -- # kill -0 67445 00:06:43.486 18:15:41 -- common/autotest_common.sh@941 -- # uname 00:06:43.486 18:15:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.486 18:15:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67445 00:06:43.486 18:15:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:43.486 killing process with pid 67445 00:06:43.486 18:15:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:43.486 18:15:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67445' 00:06:43.486 18:15:41 -- common/autotest_common.sh@955 -- # kill 67445 00:06:43.486 18:15:41 -- common/autotest_common.sh@960 -- # wait 67445 00:06:43.744 00:06:43.744 real 0m1.975s 00:06:43.744 user 0m5.689s 00:06:43.744 sys 0m0.297s 00:06:43.744 18:15:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.744 18:15:41 -- common/autotest_common.sh@10 -- # set +x 00:06:43.744 ************************************ 00:06:43.744 END TEST locking_overlapped_coremask 00:06:43.744 ************************************ 00:06:43.744 18:15:41 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:43.744 18:15:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:43.744 18:15:41 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.744 18:15:41 -- common/autotest_common.sh@10 -- # set +x 00:06:43.744 ************************************ 00:06:43.744 START TEST locking_overlapped_coremask_via_rpc 00:06:43.744 ************************************ 00:06:43.744 18:15:41 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:43.744 18:15:41 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67503 00:06:43.744 18:15:41 -- event/cpu_locks.sh@149 -- # waitforlisten 67503 /var/tmp/spdk.sock 00:06:43.744 18:15:41 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:43.745 18:15:41 -- common/autotest_common.sh@829 -- # '[' -z 67503 ']' 00:06:43.745 18:15:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.745 18:15:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.745 18:15:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.745 18:15:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.745 18:15:41 -- common/autotest_common.sh@10 -- # set +x 00:06:43.745 [2024-11-17 18:15:41.977093] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:43.745 [2024-11-17 18:15:41.977195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67503 ] 00:06:44.004 [2024-11-17 18:15:42.107366] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:44.004 [2024-11-17 18:15:42.107421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.004 [2024-11-17 18:15:42.138838] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.004 [2024-11-17 18:15:42.139146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.004 [2024-11-17 18:15:42.139317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.004 [2024-11-17 18:15:42.139319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.940 18:15:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.940 18:15:42 -- common/autotest_common.sh@862 -- # return 0 00:06:44.940 18:15:42 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67527 00:06:44.940 18:15:42 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:44.940 18:15:42 -- event/cpu_locks.sh@153 -- # waitforlisten 67527 /var/tmp/spdk2.sock 00:06:44.940 18:15:42 -- common/autotest_common.sh@829 -- # '[' -z 67527 ']' 00:06:44.940 18:15:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.940 18:15:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.940 18:15:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:44.940 18:15:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.940 18:15:42 -- common/autotest_common.sh@10 -- # set +x 00:06:44.940 [2024-11-17 18:15:43.046680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:44.940 [2024-11-17 18:15:43.046809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67527 ] 00:06:44.940 [2024-11-17 18:15:43.190373] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:44.940 [2024-11-17 18:15:43.190410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.200 [2024-11-17 18:15:43.260185] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.200 [2024-11-17 18:15:43.260505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.200 [2024-11-17 18:15:43.264355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:45.200 [2024-11-17 18:15:43.264359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.767 18:15:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.767 18:15:43 -- common/autotest_common.sh@862 -- # return 0 00:06:45.767 18:15:43 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.767 18:15:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.767 18:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:45.767 18:15:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.768 18:15:43 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.768 18:15:43 -- common/autotest_common.sh@650 -- # local es=0 00:06:45.768 18:15:43 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.768 18:15:43 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:45.768 18:15:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.768 18:15:43 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:45.768 18:15:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.768 18:15:43 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.768 18:15:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.768 18:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:45.768 [2024-11-17 18:15:43.977439] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67503 has claimed it. 
00:06:45.768 request: 00:06:45.768 { 00:06:45.768 "method": "framework_enable_cpumask_locks", 00:06:45.768 "req_id": 1 00:06:45.768 } 00:06:45.768 Got JSON-RPC error response 00:06:45.768 response: 00:06:45.768 { 00:06:45.768 "code": -32603, 00:06:45.768 "message": "Failed to claim CPU core: 2" 00:06:45.768 } 00:06:45.768 18:15:43 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:45.768 18:15:43 -- common/autotest_common.sh@653 -- # es=1 00:06:45.768 18:15:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.768 18:15:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.768 18:15:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.768 18:15:43 -- event/cpu_locks.sh@158 -- # waitforlisten 67503 /var/tmp/spdk.sock 00:06:45.768 18:15:43 -- common/autotest_common.sh@829 -- # '[' -z 67503 ']' 00:06:45.768 18:15:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.768 18:15:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.768 18:15:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.768 18:15:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.768 18:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:46.026 18:15:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.026 18:15:44 -- common/autotest_common.sh@862 -- # return 0 00:06:46.026 18:15:44 -- event/cpu_locks.sh@159 -- # waitforlisten 67527 /var/tmp/spdk2.sock 00:06:46.026 18:15:44 -- common/autotest_common.sh@829 -- # '[' -z 67527 ']' 00:06:46.026 18:15:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.026 18:15:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.026 18:15:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
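The -32603 response shown above comes from asking the second target to claim its cores after the primary target (67503) has re-enabled its own locks. Reproducing it by hand is a single RPC against the secondary socket; a sketch assuming the stock scripts/rpc.py from the checked-out repo:

  # Core 2 is shared between mask 0x7 and mask 0x1c and is already locked by
  # the primary target, so this call is expected to fail with
  # "Failed to claim CPU core: 2".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks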
00:06:46.026 18:15:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.026 18:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.285 18:15:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.285 18:15:44 -- common/autotest_common.sh@862 -- # return 0 00:06:46.285 18:15:44 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:46.285 18:15:44 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.285 18:15:44 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.285 18:15:44 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.285 00:06:46.285 real 0m2.620s 00:06:46.285 user 0m1.377s 00:06:46.285 sys 0m0.169s 00:06:46.285 18:15:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.285 18:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.285 ************************************ 00:06:46.285 END TEST locking_overlapped_coremask_via_rpc 00:06:46.285 ************************************ 00:06:46.544 18:15:44 -- event/cpu_locks.sh@174 -- # cleanup 00:06:46.544 18:15:44 -- event/cpu_locks.sh@15 -- # [[ -z 67503 ]] 00:06:46.544 18:15:44 -- event/cpu_locks.sh@15 -- # killprocess 67503 00:06:46.544 18:15:44 -- common/autotest_common.sh@936 -- # '[' -z 67503 ']' 00:06:46.544 18:15:44 -- common/autotest_common.sh@940 -- # kill -0 67503 00:06:46.544 18:15:44 -- common/autotest_common.sh@941 -- # uname 00:06:46.544 18:15:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:46.544 18:15:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67503 00:06:46.544 18:15:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:46.544 18:15:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:46.544 killing process with pid 67503 00:06:46.544 18:15:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67503' 00:06:46.544 18:15:44 -- common/autotest_common.sh@955 -- # kill 67503 00:06:46.544 18:15:44 -- common/autotest_common.sh@960 -- # wait 67503 00:06:46.803 18:15:44 -- event/cpu_locks.sh@16 -- # [[ -z 67527 ]] 00:06:46.803 18:15:44 -- event/cpu_locks.sh@16 -- # killprocess 67527 00:06:46.803 18:15:44 -- common/autotest_common.sh@936 -- # '[' -z 67527 ']' 00:06:46.803 18:15:44 -- common/autotest_common.sh@940 -- # kill -0 67527 00:06:46.803 18:15:44 -- common/autotest_common.sh@941 -- # uname 00:06:46.803 18:15:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:46.803 18:15:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67527 00:06:46.803 18:15:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:46.803 18:15:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:46.803 killing process with pid 67527 00:06:46.803 18:15:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67527' 00:06:46.803 18:15:44 -- common/autotest_common.sh@955 -- # kill 67527 00:06:46.803 18:15:44 -- common/autotest_common.sh@960 -- # wait 67527 00:06:47.062 18:15:45 -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.062 18:15:45 -- event/cpu_locks.sh@1 -- # cleanup 00:06:47.062 18:15:45 -- event/cpu_locks.sh@15 -- # [[ -z 67503 ]] 00:06:47.062 18:15:45 -- event/cpu_locks.sh@15 -- # killprocess 67503 00:06:47.062 18:15:45 -- 
common/autotest_common.sh@936 -- # '[' -z 67503 ']' 00:06:47.062 18:15:45 -- common/autotest_common.sh@940 -- # kill -0 67503 00:06:47.062 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67503) - No such process 00:06:47.062 Process with pid 67503 is not found 00:06:47.062 18:15:45 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67503 is not found' 00:06:47.062 18:15:45 -- event/cpu_locks.sh@16 -- # [[ -z 67527 ]] 00:06:47.062 18:15:45 -- event/cpu_locks.sh@16 -- # killprocess 67527 00:06:47.062 18:15:45 -- common/autotest_common.sh@936 -- # '[' -z 67527 ']' 00:06:47.062 18:15:45 -- common/autotest_common.sh@940 -- # kill -0 67527 00:06:47.062 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67527) - No such process 00:06:47.062 18:15:45 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67527 is not found' 00:06:47.062 Process with pid 67527 is not found 00:06:47.062 18:15:45 -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.062 00:06:47.062 real 0m18.547s 00:06:47.062 user 0m34.088s 00:06:47.062 sys 0m4.207s 00:06:47.062 18:15:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.062 18:15:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 ************************************ 00:06:47.062 END TEST cpu_locks 00:06:47.062 ************************************ 00:06:47.062 00:06:47.062 real 0m43.870s 00:06:47.062 user 1m25.821s 00:06:47.062 sys 0m7.393s 00:06:47.062 18:15:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.062 18:15:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 ************************************ 00:06:47.062 END TEST event 00:06:47.062 ************************************ 00:06:47.062 18:15:45 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.062 18:15:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.062 18:15:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.062 18:15:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 ************************************ 00:06:47.062 START TEST thread 00:06:47.062 ************************************ 00:06:47.062 18:15:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.062 * Looking for test storage... 
00:06:47.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:47.062 18:15:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:47.062 18:15:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:47.062 18:15:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:47.322 18:15:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:47.322 18:15:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:47.322 18:15:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:47.322 18:15:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:47.322 18:15:45 -- scripts/common.sh@335 -- # IFS=.-: 00:06:47.322 18:15:45 -- scripts/common.sh@335 -- # read -ra ver1 00:06:47.322 18:15:45 -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.322 18:15:45 -- scripts/common.sh@336 -- # read -ra ver2 00:06:47.322 18:15:45 -- scripts/common.sh@337 -- # local 'op=<' 00:06:47.322 18:15:45 -- scripts/common.sh@339 -- # ver1_l=2 00:06:47.322 18:15:45 -- scripts/common.sh@340 -- # ver2_l=1 00:06:47.322 18:15:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:47.322 18:15:45 -- scripts/common.sh@343 -- # case "$op" in 00:06:47.322 18:15:45 -- scripts/common.sh@344 -- # : 1 00:06:47.322 18:15:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:47.322 18:15:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.322 18:15:45 -- scripts/common.sh@364 -- # decimal 1 00:06:47.322 18:15:45 -- scripts/common.sh@352 -- # local d=1 00:06:47.322 18:15:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.322 18:15:45 -- scripts/common.sh@354 -- # echo 1 00:06:47.322 18:15:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:47.322 18:15:45 -- scripts/common.sh@365 -- # decimal 2 00:06:47.322 18:15:45 -- scripts/common.sh@352 -- # local d=2 00:06:47.322 18:15:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.322 18:15:45 -- scripts/common.sh@354 -- # echo 2 00:06:47.322 18:15:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:47.322 18:15:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:47.322 18:15:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:47.322 18:15:45 -- scripts/common.sh@367 -- # return 0 00:06:47.322 18:15:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.322 18:15:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:47.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.322 --rc genhtml_branch_coverage=1 00:06:47.322 --rc genhtml_function_coverage=1 00:06:47.322 --rc genhtml_legend=1 00:06:47.322 --rc geninfo_all_blocks=1 00:06:47.322 --rc geninfo_unexecuted_blocks=1 00:06:47.322 00:06:47.322 ' 00:06:47.322 18:15:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:47.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.322 --rc genhtml_branch_coverage=1 00:06:47.322 --rc genhtml_function_coverage=1 00:06:47.322 --rc genhtml_legend=1 00:06:47.322 --rc geninfo_all_blocks=1 00:06:47.322 --rc geninfo_unexecuted_blocks=1 00:06:47.322 00:06:47.322 ' 00:06:47.322 18:15:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:47.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.322 --rc genhtml_branch_coverage=1 00:06:47.322 --rc genhtml_function_coverage=1 00:06:47.322 --rc genhtml_legend=1 00:06:47.322 --rc geninfo_all_blocks=1 00:06:47.322 --rc geninfo_unexecuted_blocks=1 00:06:47.322 00:06:47.322 ' 00:06:47.322 18:15:45 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:47.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.322 --rc genhtml_branch_coverage=1 00:06:47.322 --rc genhtml_function_coverage=1 00:06:47.322 --rc genhtml_legend=1 00:06:47.322 --rc geninfo_all_blocks=1 00:06:47.322 --rc geninfo_unexecuted_blocks=1 00:06:47.322 00:06:47.322 ' 00:06:47.322 18:15:45 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.322 18:15:45 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:47.322 18:15:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.322 18:15:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.322 ************************************ 00:06:47.322 START TEST thread_poller_perf 00:06:47.322 ************************************ 00:06:47.322 18:15:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.322 [2024-11-17 18:15:45.375456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:47.322 [2024-11-17 18:15:45.376133] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67651 ] 00:06:47.322 [2024-11-17 18:15:45.498397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.322 [2024-11-17 18:15:45.527753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.322 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:48.700 [2024-11-17T18:15:46.967Z] ====================================== 00:06:48.700 [2024-11-17T18:15:46.967Z] busy:2210639602 (cyc) 00:06:48.700 [2024-11-17T18:15:46.967Z] total_run_count: 353000 00:06:48.700 [2024-11-17T18:15:46.967Z] tsc_hz: 2200000000 (cyc) 00:06:48.700 [2024-11-17T18:15:46.967Z] ====================================== 00:06:48.700 [2024-11-17T18:15:46.967Z] poller_cost: 6262 (cyc), 2846 (nsec) 00:06:48.700 00:06:48.700 real 0m1.222s 00:06:48.700 user 0m1.084s 00:06:48.700 sys 0m0.031s 00:06:48.700 18:15:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.700 18:15:46 -- common/autotest_common.sh@10 -- # set +x 00:06:48.700 ************************************ 00:06:48.700 END TEST thread_poller_perf 00:06:48.700 ************************************ 00:06:48.700 18:15:46 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.700 18:15:46 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:48.700 18:15:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.700 18:15:46 -- common/autotest_common.sh@10 -- # set +x 00:06:48.700 ************************************ 00:06:48.700 START TEST thread_poller_perf 00:06:48.700 ************************************ 00:06:48.700 18:15:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.700 [2024-11-17 18:15:46.647475] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:48.700 [2024-11-17 18:15:46.647592] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67681 ] 00:06:48.700 [2024-11-17 18:15:46.781615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.700 [2024-11-17 18:15:46.810601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.700 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:49.636 [2024-11-17T18:15:47.903Z] ====================================== 00:06:49.636 [2024-11-17T18:15:47.903Z] busy:2202831340 (cyc) 00:06:49.636 [2024-11-17T18:15:47.903Z] total_run_count: 4947000 00:06:49.636 [2024-11-17T18:15:47.903Z] tsc_hz: 2200000000 (cyc) 00:06:49.636 [2024-11-17T18:15:47.903Z] ====================================== 00:06:49.636 [2024-11-17T18:15:47.903Z] poller_cost: 445 (cyc), 202 (nsec) 00:06:49.636 00:06:49.636 real 0m1.225s 00:06:49.636 user 0m1.082s 00:06:49.636 sys 0m0.037s 00:06:49.636 18:15:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.636 ************************************ 00:06:49.636 END TEST thread_poller_perf 00:06:49.636 ************************************ 00:06:49.636 18:15:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.636 18:15:47 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:49.636 ************************************ 00:06:49.636 END TEST thread 00:06:49.636 ************************************ 00:06:49.636 00:06:49.636 real 0m2.703s 00:06:49.636 user 0m2.292s 00:06:49.636 sys 0m0.199s 00:06:49.636 18:15:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.636 18:15:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.895 18:15:47 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:49.895 18:15:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.895 18:15:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.895 18:15:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.895 ************************************ 00:06:49.895 START TEST accel 00:06:49.895 ************************************ 00:06:49.895 18:15:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:49.895 * Looking for test storage... 
00:06:49.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:49.895 18:15:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:49.895 18:15:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:49.895 18:15:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:49.895 18:15:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:49.895 18:15:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:49.895 18:15:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:49.895 18:15:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:49.895 18:15:48 -- scripts/common.sh@335 -- # IFS=.-: 00:06:49.895 18:15:48 -- scripts/common.sh@335 -- # read -ra ver1 00:06:49.895 18:15:48 -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.895 18:15:48 -- scripts/common.sh@336 -- # read -ra ver2 00:06:49.895 18:15:48 -- scripts/common.sh@337 -- # local 'op=<' 00:06:49.895 18:15:48 -- scripts/common.sh@339 -- # ver1_l=2 00:06:49.895 18:15:48 -- scripts/common.sh@340 -- # ver2_l=1 00:06:49.895 18:15:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:49.895 18:15:48 -- scripts/common.sh@343 -- # case "$op" in 00:06:49.895 18:15:48 -- scripts/common.sh@344 -- # : 1 00:06:49.895 18:15:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:49.895 18:15:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.895 18:15:48 -- scripts/common.sh@364 -- # decimal 1 00:06:49.895 18:15:48 -- scripts/common.sh@352 -- # local d=1 00:06:49.895 18:15:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.895 18:15:48 -- scripts/common.sh@354 -- # echo 1 00:06:49.895 18:15:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:49.895 18:15:48 -- scripts/common.sh@365 -- # decimal 2 00:06:49.895 18:15:48 -- scripts/common.sh@352 -- # local d=2 00:06:49.895 18:15:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.895 18:15:48 -- scripts/common.sh@354 -- # echo 2 00:06:49.895 18:15:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:49.895 18:15:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:49.895 18:15:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:49.895 18:15:48 -- scripts/common.sh@367 -- # return 0 00:06:49.895 18:15:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.895 18:15:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:49.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.895 --rc genhtml_branch_coverage=1 00:06:49.895 --rc genhtml_function_coverage=1 00:06:49.895 --rc genhtml_legend=1 00:06:49.895 --rc geninfo_all_blocks=1 00:06:49.895 --rc geninfo_unexecuted_blocks=1 00:06:49.895 00:06:49.895 ' 00:06:49.895 18:15:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:49.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.895 --rc genhtml_branch_coverage=1 00:06:49.895 --rc genhtml_function_coverage=1 00:06:49.895 --rc genhtml_legend=1 00:06:49.895 --rc geninfo_all_blocks=1 00:06:49.895 --rc geninfo_unexecuted_blocks=1 00:06:49.895 00:06:49.895 ' 00:06:49.895 18:15:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:49.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.895 --rc genhtml_branch_coverage=1 00:06:49.895 --rc genhtml_function_coverage=1 00:06:49.895 --rc genhtml_legend=1 00:06:49.895 --rc geninfo_all_blocks=1 00:06:49.895 --rc geninfo_unexecuted_blocks=1 00:06:49.895 00:06:49.895 ' 00:06:49.895 18:15:48 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:49.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.895 --rc genhtml_branch_coverage=1 00:06:49.895 --rc genhtml_function_coverage=1 00:06:49.895 --rc genhtml_legend=1 00:06:49.895 --rc geninfo_all_blocks=1 00:06:49.895 --rc geninfo_unexecuted_blocks=1 00:06:49.895 00:06:49.895 ' 00:06:49.895 18:15:48 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:49.895 18:15:48 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:49.895 18:15:48 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:49.895 18:15:48 -- accel/accel.sh@59 -- # spdk_tgt_pid=67768 00:06:49.895 18:15:48 -- accel/accel.sh@60 -- # waitforlisten 67768 00:06:49.895 18:15:48 -- common/autotest_common.sh@829 -- # '[' -z 67768 ']' 00:06:49.895 18:15:48 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:49.895 18:15:48 -- accel/accel.sh@58 -- # build_accel_config 00:06:49.895 18:15:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.895 18:15:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.895 18:15:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.895 18:15:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.895 18:15:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.895 18:15:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.895 18:15:48 -- common/autotest_common.sh@10 -- # set +x 00:06:49.895 18:15:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.895 18:15:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.895 18:15:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.895 18:15:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.895 18:15:48 -- accel/accel.sh@42 -- # jq -r . 00:06:50.155 [2024-11-17 18:15:48.189924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:50.155 [2024-11-17 18:15:48.190218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67768 ] 00:06:50.155 [2024-11-17 18:15:48.327767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.155 [2024-11-17 18:15:48.357598] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.155 [2024-11-17 18:15:48.357774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.099 18:15:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.099 18:15:49 -- common/autotest_common.sh@862 -- # return 0 00:06:51.099 18:15:49 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:51.099 18:15:49 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:51.099 18:15:49 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:51.099 18:15:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.099 18:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:51.099 18:15:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.099 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.099 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.099 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.099 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.099 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.099 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.099 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.099 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.099 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.099 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.099 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.099 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.099 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.099 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.099 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.099 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.099 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.099 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.099 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.099 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.099 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.100 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.100 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.100 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.100 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.100 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.100 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.100 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.100 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.100 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.100 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.100 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.100 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.100 
18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.100 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.100 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.100 18:15:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # IFS== 00:06:51.100 18:15:49 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.100 18:15:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.100 18:15:49 -- accel/accel.sh@67 -- # killprocess 67768 00:06:51.100 18:15:49 -- common/autotest_common.sh@936 -- # '[' -z 67768 ']' 00:06:51.100 18:15:49 -- common/autotest_common.sh@940 -- # kill -0 67768 00:06:51.100 18:15:49 -- common/autotest_common.sh@941 -- # uname 00:06:51.100 18:15:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:51.100 18:15:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67768 00:06:51.100 18:15:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:51.100 18:15:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:51.100 18:15:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67768' 00:06:51.100 killing process with pid 67768 00:06:51.100 18:15:49 -- common/autotest_common.sh@955 -- # kill 67768 00:06:51.100 18:15:49 -- common/autotest_common.sh@960 -- # wait 67768 00:06:51.361 18:15:49 -- accel/accel.sh@68 -- # trap - ERR 00:06:51.361 18:15:49 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:51.361 18:15:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:51.361 18:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.361 18:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:51.361 18:15:49 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:51.361 18:15:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:51.361 18:15:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.361 18:15:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.361 18:15:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.361 18:15:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.361 18:15:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.361 18:15:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.361 18:15:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.361 18:15:49 -- accel/accel.sh@42 -- # jq -r . 
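The killprocess 67768 sequence traced above is how the harness tears down the accel_perf target before moving on: check the pid is alive, make sure it is not a sudo wrapper, then kill and reap it. A minimal sketch of that flow, inferred from the trace rather than copied from autotest_common.sh (the pid 67768 and the reactor_0 process name come from the run above; the sudo handling is an assumption):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                    # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local comm
            comm=$(ps --no-headers -o comm= "$pid")  # reactor_0 for a running SPDK app
            [ "$comm" = sudo ] && return 1           # assumption: refuse to kill a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap it so the next test starts clean
    }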
00:06:51.361 18:15:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.361 18:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:51.361 18:15:49 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:51.361 18:15:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:51.361 18:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.361 18:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:51.361 ************************************ 00:06:51.361 START TEST accel_missing_filename 00:06:51.361 ************************************ 00:06:51.361 18:15:49 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:51.361 18:15:49 -- common/autotest_common.sh@650 -- # local es=0 00:06:51.361 18:15:49 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:51.361 18:15:49 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:51.361 18:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.361 18:15:49 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:51.361 18:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.361 18:15:49 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:51.361 18:15:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:51.361 18:15:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.361 18:15:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.361 18:15:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.361 18:15:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.361 18:15:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.361 18:15:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.361 18:15:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.361 18:15:49 -- accel/accel.sh@42 -- # jq -r . 00:06:51.361 [2024-11-17 18:15:49.609338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:51.361 [2024-11-17 18:15:49.609415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67814 ] 00:06:51.620 [2024-11-17 18:15:49.739082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.620 [2024-11-17 18:15:49.770034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.620 [2024-11-17 18:15:49.797536] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.620 [2024-11-17 18:15:49.833884] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:51.620 A filename is required. 
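accel_missing_filename deliberately runs accel_perf -t 1 -w compress without the -l input file and expects the "A filename is required." failure above. The NOT wrapper that drives it (after confirming via type -t that accel_perf is a runnable command) produces the es= lines that follow: it captures the exit status, folds signal-style values, and succeeds only if the wrapped command failed. Roughly, as a sketch inferred from the trace rather than the literal autotest_common.sh body:

    NOT() {
        local es=0
        "$@" || es=$?                         # es=234 for the run above
        (( es > 128 )) && es=$(( es - 128 ))  # strip the 128 offset: 234 -> 106
        case "$es" in
            0) ;;                             # command unexpectedly succeeded
            *) es=1 ;;                        # collapse every failure to 1
        esac
        (( !es == 0 ))                        # return success only when the command failed
    }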
00:06:51.620 18:15:49 -- common/autotest_common.sh@653 -- # es=234 00:06:51.620 18:15:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.620 18:15:49 -- common/autotest_common.sh@662 -- # es=106 00:06:51.620 ************************************ 00:06:51.620 END TEST accel_missing_filename 00:06:51.620 ************************************ 00:06:51.620 18:15:49 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:51.620 18:15:49 -- common/autotest_common.sh@670 -- # es=1 00:06:51.620 18:15:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.620 00:06:51.620 real 0m0.294s 00:06:51.620 user 0m0.161s 00:06:51.620 sys 0m0.068s 00:06:51.620 18:15:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.620 18:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:51.880 18:15:49 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.880 18:15:49 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:51.880 18:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.880 18:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:51.880 ************************************ 00:06:51.880 START TEST accel_compress_verify 00:06:51.880 ************************************ 00:06:51.880 18:15:49 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.880 18:15:49 -- common/autotest_common.sh@650 -- # local es=0 00:06:51.880 18:15:49 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.880 18:15:49 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:51.880 18:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.880 18:15:49 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:51.880 18:15:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.880 18:15:49 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.880 18:15:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.880 18:15:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.880 18:15:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.880 18:15:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.880 18:15:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.880 18:15:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.880 18:15:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.880 18:15:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.880 18:15:49 -- accel/accel.sh@42 -- # jq -r . 00:06:51.880 [2024-11-17 18:15:49.952451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
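Each of these cases is wrapped in run_test, which is what prints the START TEST / END TEST banners and the real/user/sys timings seen above around whatever command it is given. In outline (an approximation of the helper, not its exact source):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }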
00:06:51.880 [2024-11-17 18:15:49.952520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67833 ] 00:06:51.880 [2024-11-17 18:15:50.080935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.880 [2024-11-17 18:15:50.115714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.139 [2024-11-17 18:15:50.147765] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.139 [2024-11-17 18:15:50.185615] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:52.139 00:06:52.139 Compression does not support the verify option, aborting. 00:06:52.139 18:15:50 -- common/autotest_common.sh@653 -- # es=161 00:06:52.139 18:15:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.139 18:15:50 -- common/autotest_common.sh@662 -- # es=33 00:06:52.139 18:15:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:52.139 18:15:50 -- common/autotest_common.sh@670 -- # es=1 00:06:52.139 ************************************ 00:06:52.139 END TEST accel_compress_verify 00:06:52.139 ************************************ 00:06:52.139 18:15:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.139 00:06:52.139 real 0m0.308s 00:06:52.139 user 0m0.176s 00:06:52.139 sys 0m0.069s 00:06:52.139 18:15:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.139 18:15:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.139 18:15:50 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:52.139 18:15:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:52.139 18:15:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.139 18:15:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.139 ************************************ 00:06:52.139 START TEST accel_wrong_workload 00:06:52.139 ************************************ 00:06:52.139 18:15:50 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:52.139 18:15:50 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.139 18:15:50 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:52.139 18:15:50 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:52.139 18:15:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.139 18:15:50 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:52.139 18:15:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.139 18:15:50 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:52.139 18:15:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:52.139 18:15:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.139 18:15:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.139 18:15:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.139 18:15:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.139 18:15:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.139 18:15:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.139 18:15:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.139 18:15:50 -- accel/accel.sh@42 -- # jq -r . 
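Every accel_perf invocation in this log is given -c /dev/fd/62: build_accel_config collects optional accel-module RPC snippets in the accel_json_cfg array (empty in these software-only runs, hence all the [[ 0 -gt 0 ]] checks above) and streams the resulting JSON to the app over a file descriptor instead of a temp file. The shape of that idea, with illustrative names rather than the exact accel.sh code:

    accel_json_cfg=()    # would hold JSON snippets if a hardware module (dsa, iaa, ...) were requested
    build_config_json() {
        local IFS=,      # join the collected snippets with commas, as in the trace
        printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}\n' "${accel_json_cfg[*]}" | jq -r .
    }
    # a process substitution like this is what appears as /dev/fd/62 in the xtrace
    accel_perf -c <(build_config_json) -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y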
00:06:52.139 Unsupported workload type: foobar 00:06:52.139 [2024-11-17 18:15:50.308210] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:52.139 accel_perf options: 00:06:52.139 [-h help message] 00:06:52.139 [-q queue depth per core] 00:06:52.139 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:52.139 [-T number of threads per core 00:06:52.139 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:52.139 [-t time in seconds] 00:06:52.139 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:52.139 [ dif_verify, , dif_generate, dif_generate_copy 00:06:52.139 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:52.139 [-l for compress/decompress workloads, name of uncompressed input file 00:06:52.139 [-S for crc32c workload, use this seed value (default 0) 00:06:52.139 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:52.139 [-f for fill workload, use this BYTE value (default 255) 00:06:52.139 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:52.139 [-y verify result if this switch is on] 00:06:52.139 [-a tasks to allocate per core (default: same value as -q)] 00:06:52.139 Can be used to spread operations across a wider range of memory. 00:06:52.139 18:15:50 -- common/autotest_common.sh@653 -- # es=1 00:06:52.139 18:15:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.139 18:15:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.139 18:15:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.139 00:06:52.139 real 0m0.028s 00:06:52.139 user 0m0.015s 00:06:52.139 sys 0m0.011s 00:06:52.139 18:15:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.139 18:15:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.139 ************************************ 00:06:52.139 END TEST accel_wrong_workload 00:06:52.139 ************************************ 00:06:52.139 18:15:50 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.140 18:15:50 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:52.140 18:15:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.140 18:15:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.140 ************************************ 00:06:52.140 START TEST accel_negative_buffers 00:06:52.140 ************************************ 00:06:52.140 18:15:50 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.140 18:15:50 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.140 18:15:50 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:52.140 18:15:50 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:52.140 18:15:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.140 18:15:50 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:52.140 18:15:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.140 18:15:50 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:52.140 18:15:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:52.140 18:15:50 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:52.140 18:15:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.140 18:15:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.140 18:15:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.140 18:15:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.140 18:15:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.140 18:15:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.140 18:15:50 -- accel/accel.sh@42 -- # jq -r . 00:06:52.140 -x option must be non-negative. 00:06:52.140 [2024-11-17 18:15:50.388617] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:52.140 accel_perf options: 00:06:52.140 [-h help message] 00:06:52.140 [-q queue depth per core] 00:06:52.140 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:52.140 [-T number of threads per core 00:06:52.140 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:52.140 [-t time in seconds] 00:06:52.140 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:52.140 [ dif_verify, , dif_generate, dif_generate_copy 00:06:52.140 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:52.140 [-l for compress/decompress workloads, name of uncompressed input file 00:06:52.140 [-S for crc32c workload, use this seed value (default 0) 00:06:52.140 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:52.140 [-f for fill workload, use this BYTE value (default 255) 00:06:52.140 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:52.140 [-y verify result if this switch is on] 00:06:52.140 [-a tasks to allocate per core (default: same value as -q)] 00:06:52.140 Can be used to spread operations across a wider range of memory. 
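The option summary dumped above (once per rejected invocation) maps directly onto the command lines these tests exercise, for example:

    # rejected: unknown -w workload and a negative -x source-buffer count
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1
    # accepted: one second of crc32c over 4096-byte buffers with seed 32 and verification (-y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y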
00:06:52.140 18:15:50 -- common/autotest_common.sh@653 -- # es=1 00:06:52.140 18:15:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.140 18:15:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.140 18:15:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.140 00:06:52.140 real 0m0.029s 00:06:52.140 user 0m0.017s 00:06:52.140 sys 0m0.011s 00:06:52.140 18:15:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.140 18:15:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.140 ************************************ 00:06:52.140 END TEST accel_negative_buffers 00:06:52.140 ************************************ 00:06:52.399 18:15:50 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:52.399 18:15:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:52.399 18:15:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.399 18:15:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.399 ************************************ 00:06:52.399 START TEST accel_crc32c 00:06:52.399 ************************************ 00:06:52.399 18:15:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:52.399 18:15:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.399 18:15:50 -- accel/accel.sh@17 -- # local accel_module 00:06:52.399 18:15:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:52.399 18:15:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.399 18:15:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:52.399 18:15:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.399 18:15:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.399 18:15:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.399 18:15:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.399 18:15:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.399 18:15:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.399 18:15:50 -- accel/accel.sh@42 -- # jq -r . 00:06:52.399 [2024-11-17 18:15:50.469738] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:52.399 [2024-11-17 18:15:50.469820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67897 ] 00:06:52.399 [2024-11-17 18:15:50.604675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.399 [2024-11-17 18:15:50.634747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.779 18:15:51 -- accel/accel.sh@18 -- # out=' 00:06:53.779 SPDK Configuration: 00:06:53.779 Core mask: 0x1 00:06:53.779 00:06:53.779 Accel Perf Configuration: 00:06:53.779 Workload Type: crc32c 00:06:53.779 CRC-32C seed: 32 00:06:53.779 Transfer size: 4096 bytes 00:06:53.779 Vector count 1 00:06:53.779 Module: software 00:06:53.779 Queue depth: 32 00:06:53.779 Allocate depth: 32 00:06:53.779 # threads/core: 1 00:06:53.779 Run time: 1 seconds 00:06:53.779 Verify: Yes 00:06:53.779 00:06:53.779 Running for 1 seconds... 
00:06:53.779 00:06:53.779 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.779 ------------------------------------------------------------------------------------ 00:06:53.779 0,0 533536/s 2084 MiB/s 0 0 00:06:53.779 ==================================================================================== 00:06:53.779 Total 533536/s 2084 MiB/s 0 0' 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:53.779 18:15:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:53.779 18:15:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.779 18:15:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.779 18:15:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.779 18:15:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.779 18:15:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.779 18:15:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.779 18:15:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.779 18:15:51 -- accel/accel.sh@42 -- # jq -r . 00:06:53.779 [2024-11-17 18:15:51.777616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:53.779 [2024-11-17 18:15:51.777710] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67911 ] 00:06:53.779 [2024-11-17 18:15:51.913086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.779 [2024-11-17 18:15:51.942985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.779 18:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- accel/accel.sh@21 -- # val=0x1 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- accel/accel.sh@21 -- # val=crc32c 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.779 18:15:51 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- accel/accel.sh@21 -- # val=32 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- accel/accel.sh@21 -- # val=software 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.779 18:15:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- accel/accel.sh@21 -- # val=32 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.779 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.779 18:15:51 -- accel/accel.sh@21 -- # val=32 00:06:53.779 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.780 18:15:51 -- accel/accel.sh@21 -- # val=1 00:06:53.780 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.780 18:15:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.780 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.780 18:15:51 -- accel/accel.sh@21 -- # val=Yes 00:06:53.780 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.780 18:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.780 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.780 18:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.780 18:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.780 18:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:55.171 18:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.171 18:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.171 18:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.171 18:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.171 18:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.171 18:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.171 18:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.171 18:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.171 18:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.171 18:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.171 18:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.171 18:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.171 18:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.171 18:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.171 18:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.171 18:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.171 18:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.171 18:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.171 18:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.171 18:15:53 -- 
accel/accel.sh@20 -- # read -r var val 00:06:55.171 18:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.171 18:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.171 18:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.171 18:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.171 18:15:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.171 18:15:53 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:55.171 18:15:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.171 00:06:55.171 real 0m2.614s 00:06:55.171 user 0m2.268s 00:06:55.171 sys 0m0.142s 00:06:55.171 18:15:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.171 18:15:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.171 ************************************ 00:06:55.171 END TEST accel_crc32c 00:06:55.171 ************************************ 00:06:55.171 18:15:53 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:55.171 18:15:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:55.171 18:15:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.171 18:15:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.171 ************************************ 00:06:55.171 START TEST accel_crc32c_C2 00:06:55.171 ************************************ 00:06:55.171 18:15:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:55.171 18:15:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.171 18:15:53 -- accel/accel.sh@17 -- # local accel_module 00:06:55.171 18:15:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:55.171 18:15:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:55.171 18:15:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.171 18:15:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.171 18:15:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.171 18:15:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.171 18:15:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.171 18:15:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.171 18:15:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.171 18:15:53 -- accel/accel.sh@42 -- # jq -r . 00:06:55.171 [2024-11-17 18:15:53.134864] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:55.171 [2024-11-17 18:15:53.134953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67942 ] 00:06:55.171 [2024-11-17 18:15:53.273429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.171 [2024-11-17 18:15:53.308189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.551 18:15:54 -- accel/accel.sh@18 -- # out=' 00:06:56.551 SPDK Configuration: 00:06:56.551 Core mask: 0x1 00:06:56.551 00:06:56.551 Accel Perf Configuration: 00:06:56.551 Workload Type: crc32c 00:06:56.551 CRC-32C seed: 0 00:06:56.551 Transfer size: 4096 bytes 00:06:56.551 Vector count 2 00:06:56.551 Module: software 00:06:56.551 Queue depth: 32 00:06:56.551 Allocate depth: 32 00:06:56.551 # threads/core: 1 00:06:56.551 Run time: 1 seconds 00:06:56.551 Verify: Yes 00:06:56.551 00:06:56.551 Running for 1 seconds... 
00:06:56.551 00:06:56.551 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.551 ------------------------------------------------------------------------------------ 00:06:56.551 0,0 409344/s 3198 MiB/s 0 0 00:06:56.551 ==================================================================================== 00:06:56.551 Total 409344/s 1599 MiB/s 0 0' 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:56.551 18:15:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:56.551 18:15:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.551 18:15:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.551 18:15:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.551 18:15:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.551 18:15:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.551 18:15:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.551 18:15:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.551 18:15:54 -- accel/accel.sh@42 -- # jq -r . 00:06:56.551 [2024-11-17 18:15:54.437276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:56.551 [2024-11-17 18:15:54.437375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67967 ] 00:06:56.551 [2024-11-17 18:15:54.564626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.551 [2024-11-17 18:15:54.593595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val=0x1 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val=crc32c 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val=0 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val=software 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val=32 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val=32 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val=1 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val=Yes 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.551 18:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.551 18:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.551 18:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.486 18:15:55 -- accel/accel.sh@21 -- # val= 00:06:57.486 18:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.486 18:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.486 18:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.486 18:15:55 -- accel/accel.sh@21 -- # val= 00:06:57.486 18:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.486 18:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.486 18:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.486 18:15:55 -- accel/accel.sh@21 -- # val= 00:06:57.486 18:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.486 18:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.486 18:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.486 18:15:55 -- accel/accel.sh@21 -- # val= 00:06:57.486 18:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.486 18:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.486 18:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.486 18:15:55 -- accel/accel.sh@21 -- # val= 00:06:57.486 18:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.486 18:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.487 18:15:55 -- 
accel/accel.sh@20 -- # read -r var val 00:06:57.487 18:15:55 -- accel/accel.sh@21 -- # val= 00:06:57.487 18:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.487 18:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.487 18:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.487 18:15:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.487 18:15:55 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:57.487 18:15:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.487 00:06:57.487 real 0m2.596s 00:06:57.487 user 0m2.261s 00:06:57.487 sys 0m0.136s 00:06:57.487 18:15:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.487 18:15:55 -- common/autotest_common.sh@10 -- # set +x 00:06:57.487 ************************************ 00:06:57.487 END TEST accel_crc32c_C2 00:06:57.487 ************************************ 00:06:57.487 18:15:55 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:57.487 18:15:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:57.487 18:15:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.487 18:15:55 -- common/autotest_common.sh@10 -- # set +x 00:06:57.746 ************************************ 00:06:57.746 START TEST accel_copy 00:06:57.746 ************************************ 00:06:57.746 18:15:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:57.746 18:15:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.746 18:15:55 -- accel/accel.sh@17 -- # local accel_module 00:06:57.746 18:15:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:57.746 18:15:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:57.746 18:15:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.746 18:15:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.746 18:15:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.746 18:15:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.746 18:15:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.746 18:15:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.746 18:15:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.746 18:15:55 -- accel/accel.sh@42 -- # jq -r . 00:06:57.746 [2024-11-17 18:15:55.781874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:57.746 [2024-11-17 18:15:55.781968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67996 ] 00:06:57.746 [2024-11-17 18:15:55.915454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.746 [2024-11-17 18:15:55.944707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.124 18:15:57 -- accel/accel.sh@18 -- # out=' 00:06:59.124 SPDK Configuration: 00:06:59.124 Core mask: 0x1 00:06:59.124 00:06:59.124 Accel Perf Configuration: 00:06:59.124 Workload Type: copy 00:06:59.124 Transfer size: 4096 bytes 00:06:59.124 Vector count 1 00:06:59.124 Module: software 00:06:59.124 Queue depth: 32 00:06:59.124 Allocate depth: 32 00:06:59.124 # threads/core: 1 00:06:59.124 Run time: 1 seconds 00:06:59.124 Verify: Yes 00:06:59.124 00:06:59.124 Running for 1 seconds... 
00:06:59.124 00:06:59.124 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.124 ------------------------------------------------------------------------------------ 00:06:59.124 0,0 358272/s 1399 MiB/s 0 0 00:06:59.124 ==================================================================================== 00:06:59.124 Total 358272/s 1399 MiB/s 0 0' 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.124 18:15:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:59.124 18:15:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:59.124 18:15:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.124 18:15:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.124 18:15:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.124 18:15:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.124 18:15:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.124 18:15:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.124 18:15:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.124 18:15:57 -- accel/accel.sh@42 -- # jq -r . 00:06:59.124 [2024-11-17 18:15:57.087585] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:59.124 [2024-11-17 18:15:57.087678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68010 ] 00:06:59.124 [2024-11-17 18:15:57.222521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.124 [2024-11-17 18:15:57.254482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.124 18:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.124 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.124 18:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.124 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.124 18:15:57 -- accel/accel.sh@21 -- # val=0x1 00:06:59.124 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.124 18:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.124 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.124 18:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.124 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.124 18:15:57 -- accel/accel.sh@21 -- # val=copy 00:06:59.124 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.124 18:15:57 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.124 18:15:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.124 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.124 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.124 18:15:57 -- 
accel/accel.sh@21 -- # val= 00:06:59.125 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.125 18:15:57 -- accel/accel.sh@21 -- # val=software 00:06:59.125 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.125 18:15:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.125 18:15:57 -- accel/accel.sh@21 -- # val=32 00:06:59.125 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.125 18:15:57 -- accel/accel.sh@21 -- # val=32 00:06:59.125 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.125 18:15:57 -- accel/accel.sh@21 -- # val=1 00:06:59.125 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.125 18:15:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.125 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.125 18:15:57 -- accel/accel.sh@21 -- # val=Yes 00:06:59.125 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.125 18:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.125 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.125 18:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.125 18:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.125 18:15:57 -- accel/accel.sh@20 -- # read -r var val 00:07:00.502 18:15:58 -- accel/accel.sh@21 -- # val= 00:07:00.502 18:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.502 18:15:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.502 18:15:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.502 18:15:58 -- accel/accel.sh@21 -- # val= 00:07:00.502 18:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.502 18:15:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.502 18:15:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.502 18:15:58 -- accel/accel.sh@21 -- # val= 00:07:00.502 18:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.502 18:15:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.502 18:15:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.502 18:15:58 -- accel/accel.sh@21 -- # val= 00:07:00.502 18:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.502 18:15:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.502 18:15:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.502 18:15:58 -- accel/accel.sh@21 -- # val= 00:07:00.502 18:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.502 18:15:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.502 18:15:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.502 18:15:58 -- accel/accel.sh@21 -- # val= 00:07:00.502 18:15:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.502 18:15:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.502 18:15:58 -- 
accel/accel.sh@20 -- # read -r var val 00:07:00.502 18:15:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.502 18:15:58 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:00.502 18:15:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.502 00:07:00.502 real 0m2.620s 00:07:00.502 user 0m2.278s 00:07:00.502 sys 0m0.140s 00:07:00.502 18:15:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.502 18:15:58 -- common/autotest_common.sh@10 -- # set +x 00:07:00.502 ************************************ 00:07:00.502 END TEST accel_copy 00:07:00.502 ************************************ 00:07:00.502 18:15:58 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.502 18:15:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:00.502 18:15:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.502 18:15:58 -- common/autotest_common.sh@10 -- # set +x 00:07:00.502 ************************************ 00:07:00.502 START TEST accel_fill 00:07:00.502 ************************************ 00:07:00.502 18:15:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.502 18:15:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.502 18:15:58 -- accel/accel.sh@17 -- # local accel_module 00:07:00.502 18:15:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.502 18:15:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.502 18:15:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.502 18:15:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.502 18:15:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.502 18:15:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.502 18:15:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.502 18:15:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.502 18:15:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.502 18:15:58 -- accel/accel.sh@42 -- # jq -r . 00:07:00.502 [2024-11-17 18:15:58.456684] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:00.502 [2024-11-17 18:15:58.456935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68045 ] 00:07:00.502 [2024-11-17 18:15:58.592203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.502 [2024-11-17 18:15:58.623419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.888 18:15:59 -- accel/accel.sh@18 -- # out=' 00:07:01.888 SPDK Configuration: 00:07:01.888 Core mask: 0x1 00:07:01.888 00:07:01.888 Accel Perf Configuration: 00:07:01.888 Workload Type: fill 00:07:01.888 Fill pattern: 0x80 00:07:01.888 Transfer size: 4096 bytes 00:07:01.888 Vector count 1 00:07:01.888 Module: software 00:07:01.888 Queue depth: 64 00:07:01.888 Allocate depth: 64 00:07:01.888 # threads/core: 1 00:07:01.888 Run time: 1 seconds 00:07:01.888 Verify: Yes 00:07:01.888 00:07:01.888 Running for 1 seconds... 
00:07:01.888 00:07:01.888 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.888 ------------------------------------------------------------------------------------ 00:07:01.888 0,0 538944/s 2105 MiB/s 0 0 00:07:01.888 ==================================================================================== 00:07:01.888 Total 538944/s 2105 MiB/s 0 0' 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.888 18:15:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.888 18:15:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.888 18:15:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.888 18:15:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.888 18:15:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.888 18:15:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.888 18:15:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.888 18:15:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.888 18:15:59 -- accel/accel.sh@42 -- # jq -r . 00:07:01.888 [2024-11-17 18:15:59.759784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:01.888 [2024-11-17 18:15:59.759871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68064 ] 00:07:01.888 [2024-11-17 18:15:59.894117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.888 [2024-11-17 18:15:59.923008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val=0x1 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val=fill 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val=0x80 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 
00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val=software 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val=64 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val=64 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val=1 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val=Yes 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.888 18:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.888 18:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.888 18:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:02.825 18:16:01 -- accel/accel.sh@21 -- # val= 00:07:02.825 18:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # IFS=: 00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # read -r var val 00:07:02.825 18:16:01 -- accel/accel.sh@21 -- # val= 00:07:02.825 18:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # IFS=: 00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # read -r var val 00:07:02.825 18:16:01 -- accel/accel.sh@21 -- # val= 00:07:02.825 18:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # IFS=: 00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # read -r var val 00:07:02.825 18:16:01 -- accel/accel.sh@21 -- # val= 00:07:02.825 ************************************ 00:07:02.825 18:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # IFS=: 00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # read -r var val 00:07:02.825 18:16:01 -- accel/accel.sh@21 -- # val= 00:07:02.825 18:16:01 -- accel/accel.sh@22 -- # case "$var" in 
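The long val= / IFS=: / read runs surrounding each second accel_perf invocation are accel_test parsing the captured "Accel Perf Configuration" output (the out=' assignment above) field by field, so it can assert which opcode ran and on which module. The loop is roughly the following; the case patterns are illustrative, not the exact accel.sh source, and the config fd is omitted here:

    out=$(accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y)
    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=${val//[[:space:]]/} ;;    # fill
            *Module*)          accel_module=${val//[[:space:]]/} ;; # software
        esac
    done <<< "$out"
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]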
00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # IFS=: 00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # read -r var val 00:07:02.825 18:16:01 -- accel/accel.sh@21 -- # val= 00:07:02.825 18:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # IFS=: 00:07:02.825 18:16:01 -- accel/accel.sh@20 -- # read -r var val 00:07:02.825 18:16:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.825 18:16:01 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:02.825 18:16:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.825 00:07:02.825 real 0m2.607s 00:07:02.825 user 0m2.262s 00:07:02.825 sys 0m0.143s 00:07:02.825 18:16:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.825 18:16:01 -- common/autotest_common.sh@10 -- # set +x 00:07:02.825 END TEST accel_fill 00:07:02.825 ************************************ 00:07:02.825 18:16:01 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:02.825 18:16:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:02.825 18:16:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.825 18:16:01 -- common/autotest_common.sh@10 -- # set +x 00:07:03.084 ************************************ 00:07:03.084 START TEST accel_copy_crc32c 00:07:03.084 ************************************ 00:07:03.084 18:16:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:03.084 18:16:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.084 18:16:01 -- accel/accel.sh@17 -- # local accel_module 00:07:03.084 18:16:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:03.084 18:16:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:03.084 18:16:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.084 18:16:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.084 18:16:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.084 18:16:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.084 18:16:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.084 18:16:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.084 18:16:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.084 18:16:01 -- accel/accel.sh@42 -- # jq -r . 00:07:03.084 [2024-11-17 18:16:01.113541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:03.084 [2024-11-17 18:16:01.113630] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68093 ] 00:07:03.084 [2024-11-17 18:16:01.248514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.084 [2024-11-17 18:16:01.280130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.463 18:16:02 -- accel/accel.sh@18 -- # out=' 00:07:04.463 SPDK Configuration: 00:07:04.463 Core mask: 0x1 00:07:04.463 00:07:04.463 Accel Perf Configuration: 00:07:04.463 Workload Type: copy_crc32c 00:07:04.463 CRC-32C seed: 0 00:07:04.463 Vector size: 4096 bytes 00:07:04.463 Transfer size: 4096 bytes 00:07:04.463 Vector count 1 00:07:04.463 Module: software 00:07:04.463 Queue depth: 32 00:07:04.463 Allocate depth: 32 00:07:04.463 # threads/core: 1 00:07:04.463 Run time: 1 seconds 00:07:04.463 Verify: Yes 00:07:04.463 00:07:04.463 Running for 1 seconds... 
00:07:04.463 00:07:04.463 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.463 ------------------------------------------------------------------------------------ 00:07:04.463 0,0 288608/s 1127 MiB/s 0 0 00:07:04.463 ==================================================================================== 00:07:04.463 Total 288608/s 1127 MiB/s 0 0' 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.463 18:16:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.463 18:16:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:04.463 18:16:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.463 18:16:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.463 18:16:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.463 18:16:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.463 18:16:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.463 18:16:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.463 18:16:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.463 18:16:02 -- accel/accel.sh@42 -- # jq -r . 00:07:04.463 [2024-11-17 18:16:02.424108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:04.463 [2024-11-17 18:16:02.424492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68113 ] 00:07:04.463 [2024-11-17 18:16:02.559508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.463 [2024-11-17 18:16:02.588549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.463 18:16:02 -- accel/accel.sh@21 -- # val= 00:07:04.463 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.463 18:16:02 -- accel/accel.sh@21 -- # val= 00:07:04.463 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.463 18:16:02 -- accel/accel.sh@21 -- # val=0x1 00:07:04.463 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.463 18:16:02 -- accel/accel.sh@21 -- # val= 00:07:04.463 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.463 18:16:02 -- accel/accel.sh@21 -- # val= 00:07:04.463 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.463 18:16:02 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:04.463 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.463 18:16:02 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.463 18:16:02 -- accel/accel.sh@21 -- # val=0 00:07:04.463 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.463 
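The Bandwidth column in these tables is consistent with Transfers multiplied by the transfer size from the configuration summary (4096 bytes here); a quick back-of-the-envelope check, assuming MiB/s is computed with 1 MiB = 1048576 bytes:

    # 288608 transfers/s x 4096 bytes per transfer, expressed in MiB/s
    echo $(( 288608 * 4096 / 1048576 ))   # prints 1127, matching the table above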
18:16:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.463 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.463 18:16:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.463 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.463 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.463 18:16:02 -- accel/accel.sh@21 -- # val= 00:07:04.463 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.464 18:16:02 -- accel/accel.sh@21 -- # val=software 00:07:04.464 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.464 18:16:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.464 18:16:02 -- accel/accel.sh@21 -- # val=32 00:07:04.464 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.464 18:16:02 -- accel/accel.sh@21 -- # val=32 00:07:04.464 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.464 18:16:02 -- accel/accel.sh@21 -- # val=1 00:07:04.464 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.464 18:16:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.464 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.464 18:16:02 -- accel/accel.sh@21 -- # val=Yes 00:07:04.464 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.464 18:16:02 -- accel/accel.sh@21 -- # val= 00:07:04.464 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.464 18:16:02 -- accel/accel.sh@21 -- # val= 00:07:04.464 18:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.464 18:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:05.871 18:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.871 18:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.871 18:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.871 18:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.871 18:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.871 18:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.871 18:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.871 18:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # IFS=: 
00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.871 18:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.871 18:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.871 18:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.871 18:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.871 18:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.871 18:16:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.871 18:16:03 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:05.871 18:16:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.871 00:07:05.871 real 0m2.614s 00:07:05.871 user 0m2.288s 00:07:05.871 sys 0m0.126s 00:07:05.871 18:16:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.871 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:07:05.871 ************************************ 00:07:05.871 END TEST accel_copy_crc32c 00:07:05.872 ************************************ 00:07:05.872 18:16:03 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:05.872 18:16:03 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:05.872 18:16:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.872 18:16:03 -- common/autotest_common.sh@10 -- # set +x 00:07:05.872 ************************************ 00:07:05.872 START TEST accel_copy_crc32c_C2 00:07:05.872 ************************************ 00:07:05.872 18:16:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:05.872 18:16:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.872 18:16:03 -- accel/accel.sh@17 -- # local accel_module 00:07:05.872 18:16:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:05.872 18:16:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.872 18:16:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:05.872 18:16:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.872 18:16:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.872 18:16:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.872 18:16:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.872 18:16:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.872 18:16:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.872 18:16:03 -- accel/accel.sh@42 -- # jq -r . 00:07:05.872 [2024-11-17 18:16:03.777121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:05.872 [2024-11-17 18:16:03.777214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68147 ] 00:07:05.872 [2024-11-17 18:16:03.912496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.872 [2024-11-17 18:16:03.943224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.809 18:16:05 -- accel/accel.sh@18 -- # out=' 00:07:06.809 SPDK Configuration: 00:07:06.809 Core mask: 0x1 00:07:06.809 00:07:06.809 Accel Perf Configuration: 00:07:06.809 Workload Type: copy_crc32c 00:07:06.809 CRC-32C seed: 0 00:07:06.809 Vector size: 4096 bytes 00:07:06.809 Transfer size: 8192 bytes 00:07:06.809 Vector count 2 00:07:06.809 Module: software 00:07:06.809 Queue depth: 32 00:07:06.809 Allocate depth: 32 00:07:06.809 # threads/core: 1 00:07:06.809 Run time: 1 seconds 00:07:06.809 Verify: Yes 00:07:06.809 00:07:06.809 Running for 1 seconds... 00:07:06.809 00:07:06.809 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.809 ------------------------------------------------------------------------------------ 00:07:06.809 0,0 203072/s 1586 MiB/s 0 0 00:07:06.809 ==================================================================================== 00:07:06.809 Total 203072/s 793 MiB/s 0 0' 00:07:06.809 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.809 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.809 18:16:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:06.809 18:16:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.809 18:16:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:06.809 18:16:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.809 18:16:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.809 18:16:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.809 18:16:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.809 18:16:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.809 18:16:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.809 18:16:05 -- accel/accel.sh@42 -- # jq -r . 00:07:07.069 [2024-11-17 18:16:05.078133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
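In the -C 2 variant above, each operation carries two 4096-byte vectors, so the configuration summary reports an 8192-byte transfer size. The per-core row and the Total row show the same transfer count but different bandwidths; the printed values are consistent with the per-core row using the 8192-byte transfer size and the Total row using the 4096-byte vector size (an observation from the numbers only, not a statement about how accel_perf computes its totals):

    # Per-core row: 203072 transfers/s x 8192-byte transfers
    echo $(( 203072 * 8192 / 1048576 ))   # prints 1586
    # Total row matches the 4096-byte vector size instead
    echo $(( 203072 * 4096 / 1048576 ))   # prints 793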
00:07:07.069 [2024-11-17 18:16:05.078294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68161 ] 00:07:07.069 [2024-11-17 18:16:05.214700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.069 [2024-11-17 18:16:05.245404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val= 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val= 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val=0x1 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val= 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val= 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val=0 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val= 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val=software 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val=32 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val=32 
00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val=1 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val=Yes 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val= 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.069 18:16:05 -- accel/accel.sh@21 -- # val= 00:07:07.069 18:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.069 18:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:08.447 18:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.447 18:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.447 18:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.447 18:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.447 18:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.447 18:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.447 18:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.447 18:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.447 18:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.447 18:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.447 18:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.447 18:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.447 18:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.447 18:16:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.447 18:16:06 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:08.447 18:16:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.447 00:07:08.447 real 0m2.613s 00:07:08.447 user 0m2.278s 00:07:08.447 sys 0m0.134s 00:07:08.447 18:16:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.447 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:07:08.447 ************************************ 00:07:08.447 END TEST accel_copy_crc32c_C2 00:07:08.447 ************************************ 00:07:08.447 18:16:06 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:08.448 18:16:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:07:08.448 18:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.448 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:07:08.448 ************************************ 00:07:08.448 START TEST accel_dualcast 00:07:08.448 ************************************ 00:07:08.448 18:16:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:08.448 18:16:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.448 18:16:06 -- accel/accel.sh@17 -- # local accel_module 00:07:08.448 18:16:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:08.448 18:16:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:08.448 18:16:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.448 18:16:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.448 18:16:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.448 18:16:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.448 18:16:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.448 18:16:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.448 18:16:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.448 18:16:06 -- accel/accel.sh@42 -- # jq -r . 00:07:08.448 [2024-11-17 18:16:06.441688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:08.448 [2024-11-17 18:16:06.441944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68190 ] 00:07:08.448 [2024-11-17 18:16:06.579476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.448 [2024-11-17 18:16:06.609383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.829 18:16:07 -- accel/accel.sh@18 -- # out=' 00:07:09.829 SPDK Configuration: 00:07:09.829 Core mask: 0x1 00:07:09.829 00:07:09.829 Accel Perf Configuration: 00:07:09.829 Workload Type: dualcast 00:07:09.829 Transfer size: 4096 bytes 00:07:09.829 Vector count 1 00:07:09.829 Module: software 00:07:09.829 Queue depth: 32 00:07:09.829 Allocate depth: 32 00:07:09.829 # threads/core: 1 00:07:09.829 Run time: 1 seconds 00:07:09.829 Verify: Yes 00:07:09.829 00:07:09.829 Running for 1 seconds... 00:07:09.829 00:07:09.829 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.829 ------------------------------------------------------------------------------------ 00:07:09.829 0,0 386560/s 1510 MiB/s 0 0 00:07:09.829 ==================================================================================== 00:07:09.829 Total 386560/s 1510 MiB/s 0 0' 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.829 18:16:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:09.829 18:16:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:09.829 18:16:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.829 18:16:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.829 18:16:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.829 18:16:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.829 18:16:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.829 18:16:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.829 18:16:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.829 18:16:07 -- accel/accel.sh@42 -- # jq -r . 
00:07:09.829 [2024-11-17 18:16:07.747577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:09.829 [2024-11-17 18:16:07.747677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68210 ] 00:07:09.829 [2024-11-17 18:16:07.875148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.829 [2024-11-17 18:16:07.911946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.829 18:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.829 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.829 18:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.829 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.829 18:16:07 -- accel/accel.sh@21 -- # val=0x1 00:07:09.829 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.829 18:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.829 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.829 18:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.829 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.829 18:16:07 -- accel/accel.sh@21 -- # val=dualcast 00:07:09.829 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.829 18:16:07 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.829 18:16:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.829 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.829 18:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.829 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.829 18:16:07 -- accel/accel.sh@21 -- # val=software 00:07:09.829 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.829 18:16:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.829 18:16:07 -- accel/accel.sh@21 -- # val=32 00:07:09.829 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.829 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.829 18:16:07 -- accel/accel.sh@21 -- # val=32 00:07:09.829 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.830 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.830 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.830 18:16:07 -- accel/accel.sh@21 -- # val=1 00:07:09.830 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.830 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.830 
18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.830 18:16:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.830 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.830 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.830 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.830 18:16:07 -- accel/accel.sh@21 -- # val=Yes 00:07:09.830 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.830 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.830 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.830 18:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.830 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.830 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.830 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.830 18:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.830 18:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.830 18:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.830 18:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:11.209 18:16:09 -- accel/accel.sh@21 -- # val= 00:07:11.209 18:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:11.209 18:16:09 -- accel/accel.sh@21 -- # val= 00:07:11.209 18:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:11.209 18:16:09 -- accel/accel.sh@21 -- # val= 00:07:11.209 18:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:11.209 18:16:09 -- accel/accel.sh@21 -- # val= 00:07:11.209 18:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:11.209 18:16:09 -- accel/accel.sh@21 -- # val= 00:07:11.209 18:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:11.209 18:16:09 -- accel/accel.sh@21 -- # val= 00:07:11.209 18:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:11.209 18:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:11.209 ************************************ 00:07:11.209 END TEST accel_dualcast 00:07:11.209 ************************************ 00:07:11.209 18:16:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.209 18:16:09 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:11.209 18:16:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.209 00:07:11.209 real 0m2.631s 00:07:11.209 user 0m2.293s 00:07:11.209 sys 0m0.135s 00:07:11.209 18:16:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.209 18:16:09 -- common/autotest_common.sh@10 -- # set +x 00:07:11.209 18:16:09 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:11.209 18:16:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:11.209 18:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.209 18:16:09 -- common/autotest_common.sh@10 -- # set +x 00:07:11.209 ************************************ 00:07:11.209 START TEST accel_compare 00:07:11.209 ************************************ 00:07:11.209 18:16:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:11.209 
18:16:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.209 18:16:09 -- accel/accel.sh@17 -- # local accel_module 00:07:11.209 18:16:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:11.209 18:16:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:11.209 18:16:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.209 18:16:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.209 18:16:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.209 18:16:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.209 18:16:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.209 18:16:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.209 18:16:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.209 18:16:09 -- accel/accel.sh@42 -- # jq -r . 00:07:11.209 [2024-11-17 18:16:09.119355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:11.209 [2024-11-17 18:16:09.119461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68244 ] 00:07:11.209 [2024-11-17 18:16:09.254976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.209 [2024-11-17 18:16:09.285148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.147 18:16:10 -- accel/accel.sh@18 -- # out=' 00:07:12.147 SPDK Configuration: 00:07:12.147 Core mask: 0x1 00:07:12.147 00:07:12.147 Accel Perf Configuration: 00:07:12.147 Workload Type: compare 00:07:12.147 Transfer size: 4096 bytes 00:07:12.147 Vector count 1 00:07:12.147 Module: software 00:07:12.147 Queue depth: 32 00:07:12.147 Allocate depth: 32 00:07:12.147 # threads/core: 1 00:07:12.147 Run time: 1 seconds 00:07:12.147 Verify: Yes 00:07:12.147 00:07:12.147 Running for 1 seconds... 00:07:12.147 00:07:12.147 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.147 ------------------------------------------------------------------------------------ 00:07:12.147 0,0 517280/s 2020 MiB/s 0 0 00:07:12.147 ==================================================================================== 00:07:12.147 Total 517280/s 2020 MiB/s 0 0' 00:07:12.147 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.147 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.147 18:16:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:12.147 18:16:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.147 18:16:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:12.147 18:16:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.147 18:16:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.147 18:16:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.147 18:16:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.147 18:16:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.147 18:16:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.147 18:16:10 -- accel/accel.sh@42 -- # jq -r . 00:07:12.406 [2024-11-17 18:16:10.423685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:12.406 [2024-11-17 18:16:10.423925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68260 ] 00:07:12.406 [2024-11-17 18:16:10.558869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.406 [2024-11-17 18:16:10.592055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val=0x1 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val=compare 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val=software 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val=32 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val=32 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val=1 00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.406 18:16:10 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:12.406 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.406 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.407 18:16:10 -- accel/accel.sh@21 -- # val=Yes 00:07:12.407 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.407 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.407 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.407 18:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.407 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.407 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.407 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.407 18:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.407 18:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.407 18:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.407 18:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.785 18:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.785 18:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.785 18:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.785 18:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.785 18:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.785 18:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.785 18:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.785 18:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.785 18:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.785 18:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.785 18:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.785 18:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.785 18:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.785 18:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.785 18:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.785 18:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.786 18:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.786 18:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.786 18:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.786 18:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.786 18:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.786 18:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.786 18:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.786 18:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.786 18:16:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.786 18:16:11 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:13.786 18:16:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.786 00:07:13.786 real 0m2.614s 00:07:13.786 user 0m2.277s 00:07:13.786 sys 0m0.137s 00:07:13.786 ************************************ 00:07:13.786 END TEST accel_compare 00:07:13.786 ************************************ 00:07:13.786 18:16:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.786 18:16:11 -- common/autotest_common.sh@10 -- # set +x 00:07:13.786 18:16:11 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:13.786 18:16:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:13.786 18:16:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.786 18:16:11 -- common/autotest_common.sh@10 -- # set +x 00:07:13.786 ************************************ 00:07:13.786 START TEST accel_xor 00:07:13.786 ************************************ 00:07:13.786 18:16:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:13.786 18:16:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.786 18:16:11 -- accel/accel.sh@17 -- # local accel_module 00:07:13.786 
18:16:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:13.786 18:16:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:13.786 18:16:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.786 18:16:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.786 18:16:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.786 18:16:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.786 18:16:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.786 18:16:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.786 18:16:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.786 18:16:11 -- accel/accel.sh@42 -- # jq -r . 00:07:13.786 [2024-11-17 18:16:11.781643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:13.786 [2024-11-17 18:16:11.781744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68295 ] 00:07:13.786 [2024-11-17 18:16:11.915451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.786 [2024-11-17 18:16:11.945491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.165 18:16:13 -- accel/accel.sh@18 -- # out=' 00:07:15.165 SPDK Configuration: 00:07:15.165 Core mask: 0x1 00:07:15.165 00:07:15.165 Accel Perf Configuration: 00:07:15.165 Workload Type: xor 00:07:15.165 Source buffers: 2 00:07:15.165 Transfer size: 4096 bytes 00:07:15.165 Vector count 1 00:07:15.165 Module: software 00:07:15.165 Queue depth: 32 00:07:15.165 Allocate depth: 32 00:07:15.165 # threads/core: 1 00:07:15.165 Run time: 1 seconds 00:07:15.165 Verify: Yes 00:07:15.165 00:07:15.165 Running for 1 seconds... 00:07:15.165 00:07:15.165 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.165 ------------------------------------------------------------------------------------ 00:07:15.165 0,0 286240/s 1118 MiB/s 0 0 00:07:15.165 ==================================================================================== 00:07:15.165 Total 286240/s 1118 MiB/s 0 0' 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 18:16:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:15.165 18:16:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:15.165 18:16:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.165 18:16:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.165 18:16:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.165 18:16:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.165 18:16:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.165 18:16:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.165 18:16:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.165 18:16:13 -- accel/accel.sh@42 -- # jq -r . 00:07:15.165 [2024-11-17 18:16:13.073760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:15.165 [2024-11-17 18:16:13.073844] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68309 ] 00:07:15.165 [2024-11-17 18:16:13.202025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.165 [2024-11-17 18:16:13.233865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.165 18:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.165 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 18:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.165 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 18:16:13 -- accel/accel.sh@21 -- # val=0x1 00:07:15.165 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 18:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.165 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 18:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.165 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.165 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.165 18:16:13 -- accel/accel.sh@21 -- # val=xor 00:07:15.166 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.166 18:16:13 -- accel/accel.sh@21 -- # val=2 00:07:15.166 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.166 18:16:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.166 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.166 18:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.166 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.166 18:16:13 -- accel/accel.sh@21 -- # val=software 00:07:15.166 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.166 18:16:13 -- accel/accel.sh@21 -- # val=32 00:07:15.166 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.166 18:16:13 -- accel/accel.sh@21 -- # val=32 00:07:15.166 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.166 18:16:13 -- accel/accel.sh@21 -- # val=1 00:07:15.166 18:16:13 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.166 18:16:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.166 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.166 18:16:13 -- accel/accel.sh@21 -- # val=Yes 00:07:15.166 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.166 18:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.166 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.166 18:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.166 18:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.166 18:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.103 18:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.103 18:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.103 18:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.103 18:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.103 18:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.103 18:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.103 18:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.103 18:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.103 18:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.103 18:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.103 18:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.103 18:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.103 18:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.103 18:16:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.103 18:16:14 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:16.104 18:16:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.104 00:07:16.104 real 0m2.591s 00:07:16.104 user 0m2.263s 00:07:16.104 sys 0m0.128s 00:07:16.104 18:16:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.104 ************************************ 00:07:16.104 18:16:14 -- common/autotest_common.sh@10 -- # set +x 00:07:16.104 END TEST accel_xor 00:07:16.104 ************************************ 00:07:16.363 18:16:14 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:16.363 18:16:14 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:16.363 18:16:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.363 18:16:14 -- common/autotest_common.sh@10 -- # set +x 00:07:16.363 ************************************ 00:07:16.363 START TEST accel_xor 00:07:16.363 ************************************ 00:07:16.363 
18:16:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:16.363 18:16:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.363 18:16:14 -- accel/accel.sh@17 -- # local accel_module 00:07:16.363 18:16:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:16.363 18:16:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:16.363 18:16:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.363 18:16:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.363 18:16:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.363 18:16:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.363 18:16:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.363 18:16:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.363 18:16:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.363 18:16:14 -- accel/accel.sh@42 -- # jq -r . 00:07:16.363 [2024-11-17 18:16:14.420991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.363 [2024-11-17 18:16:14.421094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68343 ] 00:07:16.363 [2024-11-17 18:16:14.554944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.363 [2024-11-17 18:16:14.586031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.742 18:16:15 -- accel/accel.sh@18 -- # out=' 00:07:17.742 SPDK Configuration: 00:07:17.742 Core mask: 0x1 00:07:17.742 00:07:17.742 Accel Perf Configuration: 00:07:17.742 Workload Type: xor 00:07:17.743 Source buffers: 3 00:07:17.743 Transfer size: 4096 bytes 00:07:17.743 Vector count 1 00:07:17.743 Module: software 00:07:17.743 Queue depth: 32 00:07:17.743 Allocate depth: 32 00:07:17.743 # threads/core: 1 00:07:17.743 Run time: 1 seconds 00:07:17.743 Verify: Yes 00:07:17.743 00:07:17.743 Running for 1 seconds... 00:07:17.743 00:07:17.743 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.743 ------------------------------------------------------------------------------------ 00:07:17.743 0,0 273472/s 1068 MiB/s 0 0 00:07:17.743 ==================================================================================== 00:07:17.743 Total 273472/s 1068 MiB/s 0 0' 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:17.743 18:16:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:17.743 18:16:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.743 18:16:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.743 18:16:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.743 18:16:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.743 18:16:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.743 18:16:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.743 18:16:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.743 18:16:15 -- accel/accel.sh@42 -- # jq -r . 00:07:17.743 [2024-11-17 18:16:15.733633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
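The second xor variant above adds -x 3, which matches the "Source buffers: 3" line in its configuration summary (the preceding xor run, invoked without -x, reported "Source buffers: 2"). A sketch of the standalone invocation, under the same assumptions as the fill example earlier (workspace path from this job, JSON accel config omitted):

    # Sketch: three-source xor workload as exercised above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
    # -x 3 selects three source buffers; the earlier run without -x used two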
00:07:17.743 [2024-11-17 18:16:15.733732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68366 ] 00:07:17.743 [2024-11-17 18:16:15.866611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.743 [2024-11-17 18:16:15.896366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val= 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val= 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val=0x1 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val= 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val= 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val=xor 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val=3 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val= 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val=software 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val=32 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val=32 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val=1 00:07:17.743 18:16:15 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val=Yes 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val= 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:17.743 18:16:15 -- accel/accel.sh@21 -- # val= 00:07:17.743 18:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:17.743 18:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:19.124 18:16:17 -- accel/accel.sh@21 -- # val= 00:07:19.124 18:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.124 18:16:17 -- accel/accel.sh@21 -- # val= 00:07:19.124 18:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.124 18:16:17 -- accel/accel.sh@21 -- # val= 00:07:19.124 18:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.124 18:16:17 -- accel/accel.sh@21 -- # val= 00:07:19.124 18:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.124 ************************************ 00:07:19.124 END TEST accel_xor 00:07:19.124 ************************************ 00:07:19.124 18:16:17 -- accel/accel.sh@21 -- # val= 00:07:19.124 18:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.124 18:16:17 -- accel/accel.sh@21 -- # val= 00:07:19.124 18:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.124 18:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.124 18:16:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.124 18:16:17 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:19.124 18:16:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.124 00:07:19.124 real 0m2.615s 00:07:19.124 user 0m2.284s 00:07:19.124 sys 0m0.130s 00:07:19.124 18:16:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.124 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:07:19.124 18:16:17 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:19.124 18:16:17 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:19.124 18:16:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.124 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:07:19.124 ************************************ 00:07:19.124 START TEST accel_dif_verify 00:07:19.124 ************************************ 
00:07:19.124 18:16:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:19.124 18:16:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.124 18:16:17 -- accel/accel.sh@17 -- # local accel_module 00:07:19.124 18:16:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:19.124 18:16:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:19.124 18:16:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.124 18:16:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.124 18:16:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.124 18:16:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.124 18:16:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.124 18:16:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.124 18:16:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.124 18:16:17 -- accel/accel.sh@42 -- # jq -r . 00:07:19.124 [2024-11-17 18:16:17.088761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.124 [2024-11-17 18:16:17.088852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68395 ] 00:07:19.124 [2024-11-17 18:16:17.222926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.124 [2024-11-17 18:16:17.252110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.503 18:16:18 -- accel/accel.sh@18 -- # out=' 00:07:20.503 SPDK Configuration: 00:07:20.503 Core mask: 0x1 00:07:20.503 00:07:20.503 Accel Perf Configuration: 00:07:20.503 Workload Type: dif_verify 00:07:20.503 Vector size: 4096 bytes 00:07:20.503 Transfer size: 4096 bytes 00:07:20.503 Block size: 512 bytes 00:07:20.503 Metadata size: 8 bytes 00:07:20.503 Vector count 1 00:07:20.503 Module: software 00:07:20.503 Queue depth: 32 00:07:20.503 Allocate depth: 32 00:07:20.503 # threads/core: 1 00:07:20.503 Run time: 1 seconds 00:07:20.503 Verify: No 00:07:20.503 00:07:20.503 Running for 1 seconds... 00:07:20.503 00:07:20.503 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.503 ------------------------------------------------------------------------------------ 00:07:20.503 0,0 118592/s 470 MiB/s 0 0 00:07:20.503 ==================================================================================== 00:07:20.504 Total 118592/s 463 MiB/s 0 0' 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:20.504 18:16:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:20.504 18:16:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.504 18:16:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.504 18:16:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.504 18:16:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.504 18:16:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.504 18:16:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.504 18:16:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.504 18:16:18 -- accel/accel.sh@42 -- # jq -r . 00:07:20.504 [2024-11-17 18:16:18.394075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
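For context on the dif_verify workload: with a 512-byte block size and 8 bytes of metadata, each block carries a T10 DIF protection tuple (16-bit guard CRC, 16-bit application tag, 32-bit reference tag), and a verify transfer recomputes the guard over the block and compares it with the stored value. The sketch below is a generic illustration of that check using the standard CRC16-T10DIF polynomial 0x8BB7; the appended-tuple buffer layout is an assumption made for the example, and this is not SPDK's implementation.

```c
/* Illustrative T10 DIF guard check over a 512-byte block.
 * Layout assumption for this sketch: the 8-byte tuple follows the
 * block, i.e. 520 bytes per protected block. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLK 512
#define MD  8

/* CRC16 with the T10 DIF polynomial 0x8BB7, init 0, MSB first. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((uint16_t)buf[i] << 8);
        for (int b = 0; b < 8; b++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
        }
    }
    return crc;
}

int main(void)
{
    uint8_t blk[BLK + MD];

    memset(blk, 0x5A, BLK);

    /* "Generate": store the guard in the first two tuple bytes. */
    uint16_t guard = crc16_t10dif(blk, BLK);
    blk[BLK]     = (uint8_t)(guard >> 8);
    blk[BLK + 1] = (uint8_t)(guard & 0xFF);

    /* "Verify": recompute and compare, which is what a dif_verify
     * transfer does per block (app/ref tag checks are omitted here). */
    uint16_t stored = (uint16_t)((blk[BLK] << 8) | blk[BLK + 1]);
    printf("guard %s\n", crc16_t10dif(blk, BLK) == stored ? "ok" : "MISCOMPARE");
    return 0;
}
```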
00:07:20.504 [2024-11-17 18:16:18.394187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68409 ] 00:07:20.504 [2024-11-17 18:16:18.530638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.504 [2024-11-17 18:16:18.560178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val= 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val= 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val=0x1 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val= 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val= 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val=dif_verify 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val= 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val=software 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 
-- # val=32 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val=32 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val=1 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val=No 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val= 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:20.504 18:16:18 -- accel/accel.sh@21 -- # val= 00:07:20.504 18:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:20.504 18:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.441 18:16:19 -- accel/accel.sh@21 -- # val= 00:07:21.441 18:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.441 18:16:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.441 18:16:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.441 18:16:19 -- accel/accel.sh@21 -- # val= 00:07:21.441 18:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.441 18:16:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.441 18:16:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.441 18:16:19 -- accel/accel.sh@21 -- # val= 00:07:21.441 18:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.441 18:16:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.441 18:16:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.441 18:16:19 -- accel/accel.sh@21 -- # val= 00:07:21.441 18:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.441 18:16:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.442 18:16:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.442 18:16:19 -- accel/accel.sh@21 -- # val= 00:07:21.442 18:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.442 18:16:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.442 18:16:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.442 18:16:19 -- accel/accel.sh@21 -- # val= 00:07:21.442 18:16:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.442 18:16:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.442 ************************************ 00:07:21.442 END TEST accel_dif_verify 00:07:21.442 ************************************ 00:07:21.442 18:16:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.442 18:16:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.442 18:16:19 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:21.442 18:16:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.442 00:07:21.442 real 0m2.614s 00:07:21.442 user 0m2.288s 00:07:21.442 sys 0m0.125s 00:07:21.442 18:16:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.442 
18:16:19 -- common/autotest_common.sh@10 -- # set +x 00:07:21.701 18:16:19 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:21.701 18:16:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:21.701 18:16:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.701 18:16:19 -- common/autotest_common.sh@10 -- # set +x 00:07:21.701 ************************************ 00:07:21.701 START TEST accel_dif_generate 00:07:21.701 ************************************ 00:07:21.701 18:16:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:21.701 18:16:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.701 18:16:19 -- accel/accel.sh@17 -- # local accel_module 00:07:21.701 18:16:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:21.701 18:16:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:21.701 18:16:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.701 18:16:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.701 18:16:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.701 18:16:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.701 18:16:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.701 18:16:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.701 18:16:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.701 18:16:19 -- accel/accel.sh@42 -- # jq -r . 00:07:21.701 [2024-11-17 18:16:19.750289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:21.701 [2024-11-17 18:16:19.750422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68449 ] 00:07:21.701 [2024-11-17 18:16:19.886616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.701 [2024-11-17 18:16:19.916391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.080 18:16:21 -- accel/accel.sh@18 -- # out=' 00:07:23.080 SPDK Configuration: 00:07:23.080 Core mask: 0x1 00:07:23.080 00:07:23.080 Accel Perf Configuration: 00:07:23.080 Workload Type: dif_generate 00:07:23.080 Vector size: 4096 bytes 00:07:23.080 Transfer size: 4096 bytes 00:07:23.080 Block size: 512 bytes 00:07:23.080 Metadata size: 8 bytes 00:07:23.080 Vector count 1 00:07:23.080 Module: software 00:07:23.080 Queue depth: 32 00:07:23.080 Allocate depth: 32 00:07:23.080 # threads/core: 1 00:07:23.080 Run time: 1 seconds 00:07:23.080 Verify: No 00:07:23.080 00:07:23.080 Running for 1 seconds... 
00:07:23.080 00:07:23.080 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.080 ------------------------------------------------------------------------------------ 00:07:23.080 0,0 140352/s 556 MiB/s 0 0 00:07:23.080 ==================================================================================== 00:07:23.080 Total 140352/s 548 MiB/s 0 0' 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:23.080 18:16:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.080 18:16:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.080 18:16:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.080 18:16:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.080 18:16:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.080 18:16:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.080 18:16:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.080 18:16:21 -- accel/accel.sh@42 -- # jq -r . 00:07:23.080 [2024-11-17 18:16:21.054602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.080 [2024-11-17 18:16:21.054714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68463 ] 00:07:23.080 [2024-11-17 18:16:21.181522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.080 [2024-11-17 18:16:21.210847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val= 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val= 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val=0x1 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val= 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val= 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val=dif_generate 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 
00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val= 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val=software 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val=32 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val=32 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val=1 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val=No 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val= 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.080 18:16:21 -- accel/accel.sh@21 -- # val= 00:07:23.080 18:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.080 18:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.459 18:16:22 -- accel/accel.sh@21 -- # val= 00:07:24.459 18:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.459 18:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.459 18:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.459 18:16:22 -- accel/accel.sh@21 -- # val= 00:07:24.459 18:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.459 18:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.459 18:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.459 18:16:22 -- accel/accel.sh@21 -- # val= 00:07:24.459 18:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.459 18:16:22 -- 
accel/accel.sh@20 -- # IFS=: 00:07:24.459 18:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.459 18:16:22 -- accel/accel.sh@21 -- # val= 00:07:24.459 ************************************ 00:07:24.459 END TEST accel_dif_generate 00:07:24.459 ************************************ 00:07:24.459 18:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.459 18:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.459 18:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.459 18:16:22 -- accel/accel.sh@21 -- # val= 00:07:24.459 18:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.459 18:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.459 18:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.459 18:16:22 -- accel/accel.sh@21 -- # val= 00:07:24.459 18:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.459 18:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.459 18:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.459 18:16:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.459 18:16:22 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:24.459 18:16:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.459 00:07:24.459 real 0m2.598s 00:07:24.459 user 0m2.264s 00:07:24.459 sys 0m0.134s 00:07:24.459 18:16:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.459 18:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:24.459 18:16:22 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:24.459 18:16:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:24.459 18:16:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.459 18:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:24.459 ************************************ 00:07:24.459 START TEST accel_dif_generate_copy 00:07:24.459 ************************************ 00:07:24.459 18:16:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:24.459 18:16:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.459 18:16:22 -- accel/accel.sh@17 -- # local accel_module 00:07:24.459 18:16:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:24.459 18:16:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:24.459 18:16:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.459 18:16:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.459 18:16:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.459 18:16:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.459 18:16:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.459 18:16:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.459 18:16:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.459 18:16:22 -- accel/accel.sh@42 -- # jq -r . 00:07:24.459 [2024-11-17 18:16:22.404057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:24.459 [2024-11-17 18:16:22.404146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68492 ] 00:07:24.459 [2024-11-17 18:16:22.538870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.459 [2024-11-17 18:16:22.568296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.835 18:16:23 -- accel/accel.sh@18 -- # out=' 00:07:25.835 SPDK Configuration: 00:07:25.835 Core mask: 0x1 00:07:25.835 00:07:25.835 Accel Perf Configuration: 00:07:25.835 Workload Type: dif_generate_copy 00:07:25.835 Vector size: 4096 bytes 00:07:25.835 Transfer size: 4096 bytes 00:07:25.835 Vector count 1 00:07:25.835 Module: software 00:07:25.835 Queue depth: 32 00:07:25.835 Allocate depth: 32 00:07:25.835 # threads/core: 1 00:07:25.835 Run time: 1 seconds 00:07:25.835 Verify: No 00:07:25.835 00:07:25.835 Running for 1 seconds... 00:07:25.835 00:07:25.835 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.835 ------------------------------------------------------------------------------------ 00:07:25.835 0,0 109984/s 436 MiB/s 0 0 00:07:25.835 ==================================================================================== 00:07:25.835 Total 109984/s 429 MiB/s 0 0' 00:07:25.835 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.835 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.835 18:16:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:25.835 18:16:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:25.835 18:16:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.835 18:16:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.835 18:16:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.835 18:16:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.836 18:16:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.836 18:16:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.836 18:16:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.836 18:16:23 -- accel/accel.sh@42 -- # jq -r . 00:07:25.836 [2024-11-17 18:16:23.712647] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
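dif_generate produces the same per-block tuple that dif_verify checks, and dif_generate_copy additionally copies the data into a separate destination while attaching the tuples. The sketch below shows the shape of that copy-and-protect step for a 4096-byte transfer split into 512-byte blocks; the 520-byte output stride, the tag values, and the toy guard function are illustrative stand-ins (the real guard is a CRC16-T10DIF as in the verify sketch earlier), not SPDK's code.

```c
/* Sketch of a dif_generate_copy-style transfer: copy each 512-byte
 * data block to the destination and append an 8-byte protection tuple.
 * The guard here is a stand-in checksum, NOT the real CRC16-T10DIF;
 * app tag and ref tag values are illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLK 512
#define MD  8
#define NBLOCKS (4096 / BLK)   /* 4096-byte transfer, as in the runs above */

static uint16_t toy_guard(const uint8_t *buf, size_t len)
{
    /* Stand-in for CRC16-T10DIF, kept trivial on purpose. */
    uint16_t sum = 0;
    for (size_t i = 0; i < len; i++) {
        sum = (uint16_t)(sum + buf[i]);
    }
    return sum;
}

static void dif_generate_copy(uint8_t *dst, const uint8_t *src,
                              uint32_t start_ref_tag, uint16_t app_tag)
{
    for (uint32_t b = 0; b < NBLOCKS; b++) {
        const uint8_t *in = src + (size_t)b * BLK;
        uint8_t *out      = dst + (size_t)b * (BLK + MD);
        uint16_t guard    = toy_guard(in, BLK);
        uint32_t ref_tag  = start_ref_tag + b;

        memcpy(out, in, BLK);                 /* the "copy" part   */
        out[BLK + 0] = (uint8_t)(guard >> 8); /* guard, big-endian */
        out[BLK + 1] = (uint8_t)guard;
        out[BLK + 2] = (uint8_t)(app_tag >> 8);
        out[BLK + 3] = (uint8_t)app_tag;
        out[BLK + 4] = (uint8_t)(ref_tag >> 24);
        out[BLK + 5] = (uint8_t)(ref_tag >> 16);
        out[BLK + 6] = (uint8_t)(ref_tag >> 8);
        out[BLK + 7] = (uint8_t)ref_tag;
    }
}

int main(void)
{
    static uint8_t src[NBLOCKS * BLK];
    static uint8_t dst[NBLOCKS * (BLK + MD)];

    memset(src, 0xA5, sizeof(src));
    dif_generate_copy(dst, src, 100, 0x1234);
    printf("copied %d blocks, %zu output bytes\n", NBLOCKS, sizeof(dst));
    return 0;
}
```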
00:07:25.836 [2024-11-17 18:16:23.712750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68510 ] 00:07:25.836 [2024-11-17 18:16:23.847872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.836 [2024-11-17 18:16:23.877809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val= 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val= 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val=0x1 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val= 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val= 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val= 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val=software 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val=32 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val=32 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 
-- # val=1 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val=No 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val= 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.836 18:16:23 -- accel/accel.sh@21 -- # val= 00:07:25.836 18:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.836 18:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.771 18:16:24 -- accel/accel.sh@21 -- # val= 00:07:26.771 18:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:26.771 18:16:24 -- accel/accel.sh@21 -- # val= 00:07:26.771 18:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:26.771 18:16:24 -- accel/accel.sh@21 -- # val= 00:07:26.771 18:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:26.771 18:16:24 -- accel/accel.sh@21 -- # val= 00:07:26.771 18:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:26.771 18:16:24 -- accel/accel.sh@21 -- # val= 00:07:26.771 18:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:26.771 18:16:24 -- accel/accel.sh@21 -- # val= 00:07:26.771 18:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:26.771 18:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:26.771 18:16:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.771 18:16:24 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:26.771 18:16:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.771 00:07:26.771 real 0m2.611s 00:07:26.771 user 0m2.282s 00:07:26.771 sys 0m0.127s 00:07:26.771 18:16:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.771 18:16:24 -- common/autotest_common.sh@10 -- # set +x 00:07:26.771 ************************************ 00:07:26.771 END TEST accel_dif_generate_copy 00:07:26.771 ************************************ 00:07:26.771 18:16:25 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:26.771 18:16:25 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.771 18:16:25 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:26.771 18:16:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.771 18:16:25 -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.030 ************************************ 00:07:27.030 START TEST accel_comp 00:07:27.030 ************************************ 00:07:27.030 18:16:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.030 18:16:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.030 18:16:25 -- accel/accel.sh@17 -- # local accel_module 00:07:27.030 18:16:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.030 18:16:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.030 18:16:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.030 18:16:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.030 18:16:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.030 18:16:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.030 18:16:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.030 18:16:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.030 18:16:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.030 18:16:25 -- accel/accel.sh@42 -- # jq -r . 00:07:27.030 [2024-11-17 18:16:25.068980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:27.030 [2024-11-17 18:16:25.069086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68546 ] 00:07:27.030 [2024-11-17 18:16:25.200229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.030 [2024-11-17 18:16:25.229733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.409 18:16:26 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:28.409 00:07:28.409 SPDK Configuration: 00:07:28.409 Core mask: 0x1 00:07:28.409 00:07:28.409 Accel Perf Configuration: 00:07:28.409 Workload Type: compress 00:07:28.409 Transfer size: 4096 bytes 00:07:28.409 Vector count 1 00:07:28.409 Module: software 00:07:28.409 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.409 Queue depth: 32 00:07:28.409 Allocate depth: 32 00:07:28.409 # threads/core: 1 00:07:28.409 Run time: 1 seconds 00:07:28.409 Verify: No 00:07:28.409 00:07:28.409 Running for 1 seconds... 
00:07:28.409 00:07:28.409 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.409 ------------------------------------------------------------------------------------ 00:07:28.409 0,0 55488/s 231 MiB/s 0 0 00:07:28.409 ==================================================================================== 00:07:28.409 Total 55488/s 216 MiB/s 0 0' 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.409 18:16:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.409 18:16:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.409 18:16:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.409 18:16:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.409 18:16:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.409 18:16:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.409 18:16:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.409 18:16:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.409 18:16:26 -- accel/accel.sh@42 -- # jq -r . 00:07:28.409 [2024-11-17 18:16:26.366953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:28.409 [2024-11-17 18:16:26.367053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68560 ] 00:07:28.409 [2024-11-17 18:16:26.500891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.409 [2024-11-17 18:16:26.530132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val= 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val= 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val= 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val=0x1 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val= 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val= 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val=compress 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 
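The compress test feeds 4096-byte chunks of the /home/vagrant/spdk_repo/spdk/test/accel/bib input file through the software compression path, and the bandwidth is transfers/s times the 4096-byte transfer size (55488/s x 4096 B is about 216 MiB/s, matching the total above). As a generic stand-in for what one such transfer does (SPDK's software module has its own backend, so this is only an illustration), here is a one-shot zlib compression of a 4096-byte chunk with a made-up fill pattern.

```c
/* Generic one-shot compression of a 4096-byte chunk with zlib,
 * purely to illustrate the kind of work a "compress" transfer does.
 * Build with: cc compress_sketch.c -lz */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    unsigned char in[4096];
    unsigned char out[8192];          /* comfortably above compressBound(4096) */
    uLongf out_len = sizeof(out);

    /* Repetitive filler so the chunk compresses well; the real test
     * uses chunks of the bib file instead. */
    for (size_t i = 0; i < sizeof(in); i++) {
        in[i] = (unsigned char)("abcdabcd"[i % 8]);
    }

    if (compress2(out, &out_len, in, sizeof(in), Z_DEFAULT_COMPRESSION) != Z_OK) {
        fprintf(stderr, "compress2 failed\n");
        return 1;
    }
    printf("4096 bytes in -> %lu bytes out\n", (unsigned long)out_len);
    return 0;
}
```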
00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val= 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val=software 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val=32 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val=32 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val=1 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val=No 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val= 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.409 18:16:26 -- accel/accel.sh@21 -- # val= 00:07:28.409 18:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.409 18:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.787 18:16:27 -- accel/accel.sh@21 -- # val= 00:07:29.787 18:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.787 18:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.787 18:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.787 18:16:27 -- accel/accel.sh@21 -- # val= 00:07:29.787 18:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.787 18:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.787 18:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.787 18:16:27 -- accel/accel.sh@21 -- # val= 00:07:29.787 18:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.787 18:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.787 18:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.788 18:16:27 -- accel/accel.sh@21 -- # val= 
00:07:29.788 18:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.788 18:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.788 18:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.788 18:16:27 -- accel/accel.sh@21 -- # val= 00:07:29.788 18:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.788 18:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.788 18:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.788 ************************************ 00:07:29.788 END TEST accel_comp 00:07:29.788 ************************************ 00:07:29.788 18:16:27 -- accel/accel.sh@21 -- # val= 00:07:29.788 18:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.788 18:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.788 18:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.788 18:16:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.788 18:16:27 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:29.788 18:16:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.788 00:07:29.788 real 0m2.604s 00:07:29.788 user 0m2.280s 00:07:29.788 sys 0m0.123s 00:07:29.788 18:16:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.788 18:16:27 -- common/autotest_common.sh@10 -- # set +x 00:07:29.788 18:16:27 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.788 18:16:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:29.788 18:16:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.788 18:16:27 -- common/autotest_common.sh@10 -- # set +x 00:07:29.788 ************************************ 00:07:29.788 START TEST accel_decomp 00:07:29.788 ************************************ 00:07:29.788 18:16:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.788 18:16:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.788 18:16:27 -- accel/accel.sh@17 -- # local accel_module 00:07:29.788 18:16:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.788 18:16:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.788 18:16:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.788 18:16:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.788 18:16:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.788 18:16:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.788 18:16:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.788 18:16:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.788 18:16:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.788 18:16:27 -- accel/accel.sh@42 -- # jq -r . 00:07:29.788 [2024-11-17 18:16:27.731694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:29.788 [2024-11-17 18:16:27.731814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68589 ] 00:07:29.788 [2024-11-17 18:16:27.874262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.788 [2024-11-17 18:16:27.911068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.772 18:16:29 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:30.772 00:07:30.772 SPDK Configuration: 00:07:30.772 Core mask: 0x1 00:07:30.772 00:07:30.772 Accel Perf Configuration: 00:07:30.772 Workload Type: decompress 00:07:30.772 Transfer size: 4096 bytes 00:07:30.772 Vector count 1 00:07:30.772 Module: software 00:07:30.772 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.772 Queue depth: 32 00:07:30.772 Allocate depth: 32 00:07:30.772 # threads/core: 1 00:07:30.772 Run time: 1 seconds 00:07:30.772 Verify: Yes 00:07:30.772 00:07:30.772 Running for 1 seconds... 00:07:30.772 00:07:30.772 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.772 ------------------------------------------------------------------------------------ 00:07:30.772 0,0 80032/s 147 MiB/s 0 0 00:07:30.772 ==================================================================================== 00:07:30.772 Total 80032/s 312 MiB/s 0 0' 00:07:31.050 18:16:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.050 18:16:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.050 18:16:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.050 18:16:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.050 18:16:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.050 18:16:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.050 18:16:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.050 18:16:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.050 18:16:29 -- accel/accel.sh@42 -- # jq -r . 00:07:31.050 [2024-11-17 18:16:29.050049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
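The decompress runs reverse the compress workload (80032/s x 4096 B is about 312 MiB/s, matching the total above), and because Verify is set to Yes the expanded output is also checked against the expected data. The zlib round-trip below sketches that decompress-and-verify step under the same caveat: it illustrates the idea only and is not the SPDK software module.

```c
/* Round-trip sketch: compress a buffer, decompress it, and verify the
 * result matches the original -- the shape of a decompress run with
 * Verify: Yes. Build with: cc decompress_sketch.c -lz */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    unsigned char orig[4096], packed[8192], unpacked[4096];
    uLongf packed_len = sizeof(packed);
    uLongf unpacked_len = sizeof(unpacked);

    for (size_t i = 0; i < sizeof(orig); i++) {
        orig[i] = (unsigned char)(i % 251);   /* arbitrary pattern */
    }

    if (compress2(packed, &packed_len, orig, sizeof(orig),
                  Z_DEFAULT_COMPRESSION) != Z_OK) {
        fprintf(stderr, "compress2 failed\n");
        return 1;
    }
    if (uncompress(unpacked, &unpacked_len, packed, packed_len) != Z_OK) {
        fprintf(stderr, "uncompress failed\n");
        return 1;
    }

    /* The verify step: output must match the original input. */
    int ok = unpacked_len == sizeof(orig) &&
             memcmp(unpacked, orig, sizeof(orig)) == 0;
    printf("verify: %s\n", ok ? "ok" : "MISCOMPARE");
    return 0;
}
```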
00:07:31.050 [2024-11-17 18:16:29.050120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68614 ] 00:07:31.050 [2024-11-17 18:16:29.178676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.050 [2024-11-17 18:16:29.208412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val= 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val= 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val= 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val=0x1 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val= 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val= 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val=decompress 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val= 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val=software 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val=32 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- 
accel/accel.sh@21 -- # val=32 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val=1 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val=Yes 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val= 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.050 18:16:29 -- accel/accel.sh@21 -- # val= 00:07:31.050 18:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.050 18:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:32.428 18:16:30 -- accel/accel.sh@21 -- # val= 00:07:32.428 18:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.428 18:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.428 18:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.428 18:16:30 -- accel/accel.sh@21 -- # val= 00:07:32.428 18:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.428 18:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.428 18:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.428 18:16:30 -- accel/accel.sh@21 -- # val= 00:07:32.429 18:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.429 18:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.429 18:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.429 18:16:30 -- accel/accel.sh@21 -- # val= 00:07:32.429 18:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.429 18:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.429 18:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.429 18:16:30 -- accel/accel.sh@21 -- # val= 00:07:32.429 ************************************ 00:07:32.429 END TEST accel_decomp 00:07:32.429 ************************************ 00:07:32.429 18:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.429 18:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.429 18:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.429 18:16:30 -- accel/accel.sh@21 -- # val= 00:07:32.429 18:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.429 18:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.429 18:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.429 18:16:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.429 18:16:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:32.429 18:16:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.429 00:07:32.429 real 0m2.623s 00:07:32.429 user 0m2.286s 00:07:32.429 sys 0m0.137s 00:07:32.429 18:16:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.429 18:16:30 -- common/autotest_common.sh@10 -- # set +x 00:07:32.429 18:16:30 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
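The Bandwidth column in these per-run summaries is consistent with the Transfers column multiplied by the configured transfer size; a quick illustrative shell check (not part of the captured run), using the totals from the 4096-byte software run above and the 111250-byte full-buffer run that follows:

    echo "$(( 80032 * 4096 / 1024 / 1024 )) MiB/s"     # 312 MiB/s, matches the 4096-byte "Total" row
    echo "$(( 5312 * 111250 / 1024 / 1024 )) MiB/s"    # 563 MiB/s, matches the full-buffer "Total" row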
00:07:32.429 18:16:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:32.429 18:16:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.429 18:16:30 -- common/autotest_common.sh@10 -- # set +x 00:07:32.429 ************************************ 00:07:32.429 START TEST accel_decmop_full 00:07:32.429 ************************************ 00:07:32.429 18:16:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:32.429 18:16:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.429 18:16:30 -- accel/accel.sh@17 -- # local accel_module 00:07:32.429 18:16:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:32.429 18:16:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:32.429 18:16:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.429 18:16:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.429 18:16:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.429 18:16:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.429 18:16:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.429 18:16:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.429 18:16:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.429 18:16:30 -- accel/accel.sh@42 -- # jq -r . 00:07:32.429 [2024-11-17 18:16:30.404365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.429 [2024-11-17 18:16:30.404587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68643 ] 00:07:32.429 [2024-11-17 18:16:30.539073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.429 [2024-11-17 18:16:30.568619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.807 18:16:31 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:33.807 00:07:33.807 SPDK Configuration: 00:07:33.807 Core mask: 0x1 00:07:33.807 00:07:33.807 Accel Perf Configuration: 00:07:33.807 Workload Type: decompress 00:07:33.807 Transfer size: 111250 bytes 00:07:33.807 Vector count 1 00:07:33.807 Module: software 00:07:33.807 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.807 Queue depth: 32 00:07:33.807 Allocate depth: 32 00:07:33.807 # threads/core: 1 00:07:33.807 Run time: 1 seconds 00:07:33.807 Verify: Yes 00:07:33.807 00:07:33.807 Running for 1 seconds... 
00:07:33.807 00:07:33.807 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.807 ------------------------------------------------------------------------------------ 00:07:33.807 0,0 5312/s 219 MiB/s 0 0 00:07:33.808 ==================================================================================== 00:07:33.808 Total 5312/s 563 MiB/s 0 0' 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:33.808 18:16:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:33.808 18:16:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.808 18:16:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.808 18:16:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.808 18:16:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.808 18:16:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.808 18:16:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.808 18:16:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.808 18:16:31 -- accel/accel.sh@42 -- # jq -r . 00:07:33.808 [2024-11-17 18:16:31.718698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:33.808 [2024-11-17 18:16:31.718806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68657 ] 00:07:33.808 [2024-11-17 18:16:31.852797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.808 [2024-11-17 18:16:31.882266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val= 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val= 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val= 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val=0x1 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val= 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val= 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val=decompress 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.808 18:16:31 -- accel/accel.sh@20 
-- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val= 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val=software 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val=32 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val=32 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val=1 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val=Yes 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val= 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:33.808 18:16:31 -- accel/accel.sh@21 -- # val= 00:07:33.808 18:16:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:33.808 18:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.745 18:16:33 -- accel/accel.sh@21 -- # val= 00:07:34.745 18:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.745 18:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:34.745 18:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:34.745 18:16:33 -- accel/accel.sh@21 -- # val= 00:07:34.745 18:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.745 18:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:34.745 18:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:34.745 18:16:33 -- accel/accel.sh@21 -- # val= 00:07:34.745 18:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.745 18:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:34.745 18:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:34.745 18:16:33 -- accel/accel.sh@21 -- # 
val= 00:07:34.745 18:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.745 18:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:34.745 18:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.004 18:16:33 -- accel/accel.sh@21 -- # val= 00:07:35.004 18:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.004 18:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.004 18:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.004 18:16:33 -- accel/accel.sh@21 -- # val= 00:07:35.004 18:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.004 18:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.004 18:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.004 ************************************ 00:07:35.004 END TEST accel_decmop_full 00:07:35.004 ************************************ 00:07:35.004 18:16:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.004 18:16:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:35.004 18:16:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.005 00:07:35.005 real 0m2.632s 00:07:35.005 user 0m2.311s 00:07:35.005 sys 0m0.122s 00:07:35.005 18:16:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.005 18:16:33 -- common/autotest_common.sh@10 -- # set +x 00:07:35.005 18:16:33 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:35.005 18:16:33 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:35.005 18:16:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.005 18:16:33 -- common/autotest_common.sh@10 -- # set +x 00:07:35.005 ************************************ 00:07:35.005 START TEST accel_decomp_mcore 00:07:35.005 ************************************ 00:07:35.005 18:16:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:35.005 18:16:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.005 18:16:33 -- accel/accel.sh@17 -- # local accel_module 00:07:35.005 18:16:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:35.005 18:16:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:35.005 18:16:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.005 18:16:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.005 18:16:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.005 18:16:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.005 18:16:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.005 18:16:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.005 18:16:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.005 18:16:33 -- accel/accel.sh@42 -- # jq -r . 00:07:35.005 [2024-11-17 18:16:33.082031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:35.005 [2024-11-17 18:16:33.082119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68692 ] 00:07:35.005 [2024-11-17 18:16:33.213220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.005 [2024-11-17 18:16:33.244961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.005 [2024-11-17 18:16:33.245078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.005 [2024-11-17 18:16:33.245198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.005 [2024-11-17 18:16:33.245199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.424 18:16:34 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:36.424 00:07:36.424 SPDK Configuration: 00:07:36.424 Core mask: 0xf 00:07:36.424 00:07:36.424 Accel Perf Configuration: 00:07:36.424 Workload Type: decompress 00:07:36.424 Transfer size: 4096 bytes 00:07:36.424 Vector count 1 00:07:36.424 Module: software 00:07:36.424 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.424 Queue depth: 32 00:07:36.424 Allocate depth: 32 00:07:36.424 # threads/core: 1 00:07:36.424 Run time: 1 seconds 00:07:36.424 Verify: Yes 00:07:36.424 00:07:36.424 Running for 1 seconds... 00:07:36.424 00:07:36.424 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.424 ------------------------------------------------------------------------------------ 00:07:36.424 0,0 64928/s 119 MiB/s 0 0 00:07:36.424 3,0 59168/s 109 MiB/s 0 0 00:07:36.424 2,0 60256/s 111 MiB/s 0 0 00:07:36.424 1,0 61856/s 113 MiB/s 0 0 00:07:36.424 ==================================================================================== 00:07:36.424 Total 246208/s 961 MiB/s 0 0' 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.424 18:16:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:36.424 18:16:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.424 18:16:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:36.424 18:16:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.424 18:16:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.424 18:16:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.424 18:16:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.424 18:16:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.424 18:16:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.424 18:16:34 -- accel/accel.sh@42 -- # jq -r . 00:07:36.424 [2024-11-17 18:16:34.386614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:36.424 [2024-11-17 18:16:34.387269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68714 ] 00:07:36.424 [2024-11-17 18:16:34.517999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.424 [2024-11-17 18:16:34.549451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.424 [2024-11-17 18:16:34.549526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.424 [2024-11-17 18:16:34.549665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.424 [2024-11-17 18:16:34.549667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.424 18:16:34 -- accel/accel.sh@21 -- # val= 00:07:36.424 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.424 18:16:34 -- accel/accel.sh@21 -- # val= 00:07:36.424 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.424 18:16:34 -- accel/accel.sh@21 -- # val= 00:07:36.424 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.424 18:16:34 -- accel/accel.sh@21 -- # val=0xf 00:07:36.424 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.424 18:16:34 -- accel/accel.sh@21 -- # val= 00:07:36.424 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.424 18:16:34 -- accel/accel.sh@21 -- # val= 00:07:36.424 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.424 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.424 18:16:34 -- accel/accel.sh@21 -- # val=decompress 00:07:36.424 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.424 18:16:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.425 18:16:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.425 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.425 18:16:34 -- accel/accel.sh@21 -- # val= 00:07:36.425 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.425 18:16:34 -- accel/accel.sh@21 -- # val=software 00:07:36.425 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.425 18:16:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.425 18:16:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.425 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 
00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.425 18:16:34 -- accel/accel.sh@21 -- # val=32 00:07:36.425 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.425 18:16:34 -- accel/accel.sh@21 -- # val=32 00:07:36.425 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.425 18:16:34 -- accel/accel.sh@21 -- # val=1 00:07:36.425 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.425 18:16:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.425 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.425 18:16:34 -- accel/accel.sh@21 -- # val=Yes 00:07:36.425 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.425 18:16:34 -- accel/accel.sh@21 -- # val= 00:07:36.425 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.425 18:16:34 -- accel/accel.sh@21 -- # val= 00:07:36.425 18:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.425 18:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.807 18:16:35 -- accel/accel.sh@21 -- # val= 00:07:37.807 18:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.807 18:16:35 -- accel/accel.sh@21 -- # val= 00:07:37.807 18:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.807 18:16:35 -- accel/accel.sh@21 -- # val= 00:07:37.807 18:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.807 18:16:35 -- accel/accel.sh@21 -- # val= 00:07:37.807 18:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.807 18:16:35 -- accel/accel.sh@21 -- # val= 00:07:37.807 18:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.807 18:16:35 -- accel/accel.sh@21 -- # val= 00:07:37.807 18:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.807 18:16:35 -- accel/accel.sh@21 -- # val= 00:07:37.807 18:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.807 18:16:35 -- accel/accel.sh@21 -- # val= 00:07:37.807 18:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.807 18:16:35 -- 
accel/accel.sh@20 -- # read -r var val 00:07:37.807 18:16:35 -- accel/accel.sh@21 -- # val= 00:07:37.807 18:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.807 18:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.807 18:16:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.807 18:16:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:37.807 18:16:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.807 00:07:37.807 real 0m2.625s 00:07:37.807 user 0m8.684s 00:07:37.807 sys 0m0.154s 00:07:37.807 ************************************ 00:07:37.807 END TEST accel_decomp_mcore 00:07:37.807 ************************************ 00:07:37.807 18:16:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.807 18:16:35 -- common/autotest_common.sh@10 -- # set +x 00:07:37.807 18:16:35 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.807 18:16:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:37.807 18:16:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.807 18:16:35 -- common/autotest_common.sh@10 -- # set +x 00:07:37.807 ************************************ 00:07:37.807 START TEST accel_decomp_full_mcore 00:07:37.807 ************************************ 00:07:37.807 18:16:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.807 18:16:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.807 18:16:35 -- accel/accel.sh@17 -- # local accel_module 00:07:37.807 18:16:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.807 18:16:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.807 18:16:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.807 18:16:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.807 18:16:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.807 18:16:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.807 18:16:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.807 18:16:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.807 18:16:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.807 18:16:35 -- accel/accel.sh@42 -- # jq -r . 00:07:37.807 [2024-11-17 18:16:35.755496] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:37.807 [2024-11-17 18:16:35.755586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68746 ] 00:07:37.807 [2024-11-17 18:16:35.886470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.807 [2024-11-17 18:16:35.917742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.807 [2024-11-17 18:16:35.917886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.807 [2024-11-17 18:16:35.917998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.807 [2024-11-17 18:16:35.918247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.187 18:16:37 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:39.187 00:07:39.187 SPDK Configuration: 00:07:39.187 Core mask: 0xf 00:07:39.187 00:07:39.187 Accel Perf Configuration: 00:07:39.187 Workload Type: decompress 00:07:39.187 Transfer size: 111250 bytes 00:07:39.187 Vector count 1 00:07:39.187 Module: software 00:07:39.187 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.187 Queue depth: 32 00:07:39.187 Allocate depth: 32 00:07:39.187 # threads/core: 1 00:07:39.187 Run time: 1 seconds 00:07:39.187 Verify: Yes 00:07:39.187 00:07:39.187 Running for 1 seconds... 00:07:39.187 00:07:39.187 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.187 ------------------------------------------------------------------------------------ 00:07:39.187 0,0 4896/s 202 MiB/s 0 0 00:07:39.187 3,0 4864/s 200 MiB/s 0 0 00:07:39.187 2,0 4864/s 200 MiB/s 0 0 00:07:39.187 1,0 4896/s 202 MiB/s 0 0 00:07:39.187 ==================================================================================== 00:07:39.187 Total 19520/s 2070 MiB/s 0 0' 00:07:39.187 18:16:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:39.187 18:16:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.187 18:16:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.187 18:16:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.187 18:16:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.187 18:16:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.187 18:16:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.187 18:16:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.187 18:16:37 -- accel/accel.sh@42 -- # jq -r . 00:07:39.187 [2024-11-17 18:16:37.075859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:39.187 [2024-11-17 18:16:37.075954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68764 ] 00:07:39.187 [2024-11-17 18:16:37.211540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.187 [2024-11-17 18:16:37.252774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.187 [2024-11-17 18:16:37.252923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.187 [2024-11-17 18:16:37.253064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.187 [2024-11-17 18:16:37.253391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val= 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val= 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val= 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val=0xf 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val= 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val= 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val=decompress 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val= 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val=software 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 
00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val=32 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val=32 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val=1 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val=Yes 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val= 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.187 18:16:37 -- accel/accel.sh@21 -- # val= 00:07:39.187 18:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.187 18:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:40.568 18:16:38 -- accel/accel.sh@21 -- # val= 00:07:40.568 18:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.568 18:16:38 -- accel/accel.sh@21 -- # val= 00:07:40.568 18:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.568 18:16:38 -- accel/accel.sh@21 -- # val= 00:07:40.568 18:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.568 18:16:38 -- accel/accel.sh@21 -- # val= 00:07:40.568 18:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.568 18:16:38 -- accel/accel.sh@21 -- # val= 00:07:40.568 18:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.568 18:16:38 -- accel/accel.sh@21 -- # val= 00:07:40.568 18:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.568 18:16:38 -- accel/accel.sh@21 -- # val= 00:07:40.568 18:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.568 18:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.568 18:16:38 -- accel/accel.sh@21 -- # val= 00:07:40.568 18:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.569 18:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.569 18:16:38 -- 
accel/accel.sh@20 -- # read -r var val 00:07:40.569 18:16:38 -- accel/accel.sh@21 -- # val= 00:07:40.569 18:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.569 18:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.569 18:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.569 18:16:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.569 18:16:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.569 ************************************ 00:07:40.569 END TEST accel_decomp_full_mcore 00:07:40.569 ************************************ 00:07:40.569 18:16:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.569 00:07:40.569 real 0m2.681s 00:07:40.569 user 0m8.802s 00:07:40.569 sys 0m0.170s 00:07:40.569 18:16:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.569 18:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:40.569 18:16:38 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.569 18:16:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:40.569 18:16:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.569 18:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:40.569 ************************************ 00:07:40.569 START TEST accel_decomp_mthread 00:07:40.569 ************************************ 00:07:40.569 18:16:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.569 18:16:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.569 18:16:38 -- accel/accel.sh@17 -- # local accel_module 00:07:40.569 18:16:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.569 18:16:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:40.569 18:16:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.569 18:16:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.569 18:16:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.569 18:16:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.569 18:16:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.569 18:16:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.569 18:16:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.569 18:16:38 -- accel/accel.sh@42 -- # jq -r . 00:07:40.569 [2024-11-17 18:16:38.489957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:40.569 [2024-11-17 18:16:38.490048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68806 ] 00:07:40.569 [2024-11-17 18:16:38.624941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.569 [2024-11-17 18:16:38.654748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.514 18:16:39 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:41.514 00:07:41.514 SPDK Configuration: 00:07:41.514 Core mask: 0x1 00:07:41.514 00:07:41.514 Accel Perf Configuration: 00:07:41.514 Workload Type: decompress 00:07:41.514 Transfer size: 4096 bytes 00:07:41.514 Vector count 1 00:07:41.514 Module: software 00:07:41.514 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.514 Queue depth: 32 00:07:41.514 Allocate depth: 32 00:07:41.514 # threads/core: 2 00:07:41.514 Run time: 1 seconds 00:07:41.514 Verify: Yes 00:07:41.514 00:07:41.514 Running for 1 seconds... 00:07:41.514 00:07:41.514 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.514 ------------------------------------------------------------------------------------ 00:07:41.514 0,1 40064/s 73 MiB/s 0 0 00:07:41.514 0,0 39968/s 73 MiB/s 0 0 00:07:41.514 ==================================================================================== 00:07:41.514 Total 80032/s 312 MiB/s 0 0' 00:07:41.514 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.514 18:16:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:41.514 18:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.514 18:16:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:41.514 18:16:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.514 18:16:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.514 18:16:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.514 18:16:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.514 18:16:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.514 18:16:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.514 18:16:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.514 18:16:39 -- accel/accel.sh@42 -- # jq -r . 00:07:41.774 [2024-11-17 18:16:39.798611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:41.774 [2024-11-17 18:16:39.798717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68820 ] 00:07:41.774 [2024-11-17 18:16:39.934564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.774 [2024-11-17 18:16:39.964034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.774 18:16:39 -- accel/accel.sh@21 -- # val= 00:07:41.774 18:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:39 -- accel/accel.sh@21 -- # val= 00:07:41.774 18:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:39 -- accel/accel.sh@21 -- # val= 00:07:41.774 18:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:39 -- accel/accel.sh@21 -- # val=0x1 00:07:41.774 18:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:39 -- accel/accel.sh@21 -- # val= 00:07:41.774 18:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:39 -- accel/accel.sh@21 -- # val= 00:07:41.774 18:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:39 -- accel/accel.sh@21 -- # val=decompress 00:07:41.774 18:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.774 18:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:39 -- accel/accel.sh@21 -- # val= 00:07:41.774 18:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:39 -- accel/accel.sh@21 -- # val=software 00:07:41.774 18:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.774 18:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:40 -- accel/accel.sh@21 -- # val=32 00:07:41.774 18:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:40 -- 
accel/accel.sh@21 -- # val=32 00:07:41.774 18:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:40 -- accel/accel.sh@21 -- # val=2 00:07:41.774 18:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.774 18:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:40 -- accel/accel.sh@21 -- # val=Yes 00:07:41.774 18:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:40 -- accel/accel.sh@21 -- # val= 00:07:41.774 18:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:41.774 18:16:40 -- accel/accel.sh@21 -- # val= 00:07:41.774 18:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:41.774 18:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:43.153 18:16:41 -- accel/accel.sh@21 -- # val= 00:07:43.153 18:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.153 18:16:41 -- accel/accel.sh@21 -- # val= 00:07:43.153 18:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.153 18:16:41 -- accel/accel.sh@21 -- # val= 00:07:43.153 18:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.153 18:16:41 -- accel/accel.sh@21 -- # val= 00:07:43.153 18:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.153 18:16:41 -- accel/accel.sh@21 -- # val= 00:07:43.153 18:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.153 18:16:41 -- accel/accel.sh@21 -- # val= 00:07:43.153 18:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.153 18:16:41 -- accel/accel.sh@21 -- # val= 00:07:43.153 18:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.153 18:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.153 18:16:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.153 18:16:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:43.153 18:16:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.153 00:07:43.153 real 0m2.622s 00:07:43.153 user 0m2.270s 00:07:43.153 sys 0m0.145s 00:07:43.153 18:16:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.153 18:16:41 -- common/autotest_common.sh@10 -- # set +x 00:07:43.153 ************************************ 00:07:43.153 END 
TEST accel_decomp_mthread 00:07:43.153 ************************************ 00:07:43.153 18:16:41 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.153 18:16:41 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:43.153 18:16:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.153 18:16:41 -- common/autotest_common.sh@10 -- # set +x 00:07:43.153 ************************************ 00:07:43.153 START TEST accel_deomp_full_mthread 00:07:43.153 ************************************ 00:07:43.153 18:16:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.153 18:16:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.153 18:16:41 -- accel/accel.sh@17 -- # local accel_module 00:07:43.153 18:16:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.153 18:16:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.153 18:16:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.153 18:16:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.153 18:16:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.153 18:16:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.153 18:16:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.153 18:16:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.153 18:16:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.153 18:16:41 -- accel/accel.sh@42 -- # jq -r . 00:07:43.153 [2024-11-17 18:16:41.163539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:43.153 [2024-11-17 18:16:41.163640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68854 ] 00:07:43.153 [2024-11-17 18:16:41.298641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.153 [2024-11-17 18:16:41.331757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.533 18:16:42 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:44.533 00:07:44.533 SPDK Configuration: 00:07:44.533 Core mask: 0x1 00:07:44.533 00:07:44.533 Accel Perf Configuration: 00:07:44.533 Workload Type: decompress 00:07:44.533 Transfer size: 111250 bytes 00:07:44.533 Vector count 1 00:07:44.533 Module: software 00:07:44.533 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.533 Queue depth: 32 00:07:44.533 Allocate depth: 32 00:07:44.533 # threads/core: 2 00:07:44.533 Run time: 1 seconds 00:07:44.533 Verify: Yes 00:07:44.533 00:07:44.533 Running for 1 seconds... 
00:07:44.533 00:07:44.533 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.533 ------------------------------------------------------------------------------------ 00:07:44.533 0,1 2560/s 105 MiB/s 0 0 00:07:44.534 0,0 2528/s 104 MiB/s 0 0 00:07:44.534 ==================================================================================== 00:07:44.534 Total 5088/s 539 MiB/s 0 0' 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.534 18:16:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.534 18:16:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.534 18:16:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.534 18:16:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.534 18:16:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.534 18:16:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.534 18:16:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.534 18:16:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.534 18:16:42 -- accel/accel.sh@42 -- # jq -r . 00:07:44.534 [2024-11-17 18:16:42.498789] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:44.534 [2024-11-17 18:16:42.498873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68869 ] 00:07:44.534 [2024-11-17 18:16:42.632737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.534 [2024-11-17 18:16:42.662875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val= 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val= 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val= 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val=0x1 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val= 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val= 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val=decompress 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val= 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val=software 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val=32 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val=32 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val=2 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val=Yes 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val= 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:44.534 18:16:42 -- accel/accel.sh@21 -- # val= 00:07:44.534 18:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:44.534 18:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.913 18:16:43 -- accel/accel.sh@21 -- # val= 00:07:45.913 18:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.913 18:16:43 -- accel/accel.sh@21 -- # val= 00:07:45.913 18:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.913 18:16:43 -- accel/accel.sh@21 -- # val= 00:07:45.913 18:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # 
read -r var val 00:07:45.913 18:16:43 -- accel/accel.sh@21 -- # val= 00:07:45.913 18:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.913 18:16:43 -- accel/accel.sh@21 -- # val= 00:07:45.913 18:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.913 18:16:43 -- accel/accel.sh@21 -- # val= 00:07:45.913 18:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.913 18:16:43 -- accel/accel.sh@21 -- # val= 00:07:45.913 18:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.913 18:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.913 18:16:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.913 18:16:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:45.913 18:16:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.913 00:07:45.913 real 0m2.668s 00:07:45.913 user 0m2.324s 00:07:45.913 sys 0m0.143s 00:07:45.913 18:16:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.913 18:16:43 -- common/autotest_common.sh@10 -- # set +x 00:07:45.913 ************************************ 00:07:45.913 END TEST accel_deomp_full_mthread 00:07:45.913 ************************************ 00:07:45.913 18:16:43 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:45.913 18:16:43 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:45.913 18:16:43 -- accel/accel.sh@129 -- # build_accel_config 00:07:45.913 18:16:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.913 18:16:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:45.913 18:16:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.913 18:16:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.913 18:16:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.913 18:16:43 -- common/autotest_common.sh@10 -- # set +x 00:07:45.913 18:16:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.913 18:16:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.913 18:16:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.913 18:16:43 -- accel/accel.sh@42 -- # jq -r . 00:07:45.913 ************************************ 00:07:45.913 START TEST accel_dif_functional_tests 00:07:45.913 ************************************ 00:07:45.913 18:16:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:45.913 [2024-11-17 18:16:43.906339] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:45.913 [2024-11-17 18:16:43.906456] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68904 ] 00:07:45.913 [2024-11-17 18:16:44.037726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.913 [2024-11-17 18:16:44.068899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.913 [2024-11-17 18:16:44.069033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.913 [2024-11-17 18:16:44.069036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.913 00:07:45.913 00:07:45.913 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.913 http://cunit.sourceforge.net/ 00:07:45.913 00:07:45.913 00:07:45.913 Suite: accel_dif 00:07:45.913 Test: verify: DIF generated, GUARD check ...passed 00:07:45.913 Test: verify: DIF generated, APPTAG check ...passed 00:07:45.913 Test: verify: DIF generated, REFTAG check ...passed 00:07:45.913 Test: verify: DIF not generated, GUARD check ...passed 00:07:45.913 Test: verify: DIF not generated, APPTAG check ...[2024-11-17 18:16:44.113839] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:45.913 [2024-11-17 18:16:44.113996] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:45.913 [2024-11-17 18:16:44.114037] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:45.913 passed 00:07:45.913 Test: verify: DIF not generated, REFTAG check ...[2024-11-17 18:16:44.114065] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:45.913 [2024-11-17 18:16:44.114091] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:45.913 passed 00:07:45.913 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:45.913 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-17 18:16:44.114241] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:45.913 [2024-11-17 18:16:44.114328] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:45.913 passed 00:07:45.914 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:45.914 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:45.914 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:45.914 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:45.914 Test: generate copy: DIF generated, GUARD check ...passed 00:07:45.914 Test: generate copy: DIF generated, APTTAG check ...passed[2024-11-17 18:16:44.114624] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:45.914 00:07:45.914 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:45.914 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:45.914 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:45.914 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:45.914 Test: generate copy: iovecs-len validate ...passed 00:07:45.914 Test: generate copy: buffer alignment validate ...[2024-11-17 18:16:44.115094] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:45.914 passed 00:07:45.914 00:07:45.914 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.914 suites 1 1 n/a 0 0 00:07:45.914 tests 20 20 20 0 0 00:07:45.914 asserts 204 204 204 0 n/a 00:07:45.914 00:07:45.914 Elapsed time = 0.003 seconds 00:07:46.172 00:07:46.172 real 0m0.369s 00:07:46.172 user 0m0.430s 00:07:46.172 sys 0m0.086s 00:07:46.173 18:16:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.173 ************************************ 00:07:46.173 END TEST accel_dif_functional_tests 00:07:46.173 18:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.173 ************************************ 00:07:46.173 00:07:46.173 real 0m56.325s 00:07:46.173 user 1m1.663s 00:07:46.173 sys 0m4.036s 00:07:46.173 18:16:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.173 ************************************ 00:07:46.173 END TEST accel 00:07:46.173 ************************************ 00:07:46.173 18:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.173 18:16:44 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:46.173 18:16:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.173 18:16:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.173 18:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.173 ************************************ 00:07:46.173 START TEST accel_rpc 00:07:46.173 ************************************ 00:07:46.173 18:16:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:46.173 * Looking for test storage... 00:07:46.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:46.173 18:16:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:46.173 18:16:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:46.173 18:16:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:46.432 18:16:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:46.432 18:16:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:46.432 18:16:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:46.432 18:16:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:46.432 18:16:44 -- scripts/common.sh@335 -- # IFS=.-: 00:07:46.432 18:16:44 -- scripts/common.sh@335 -- # read -ra ver1 00:07:46.432 18:16:44 -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.432 18:16:44 -- scripts/common.sh@336 -- # read -ra ver2 00:07:46.432 18:16:44 -- scripts/common.sh@337 -- # local 'op=<' 00:07:46.432 18:16:44 -- scripts/common.sh@339 -- # ver1_l=2 00:07:46.432 18:16:44 -- scripts/common.sh@340 -- # ver2_l=1 00:07:46.432 18:16:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:46.432 18:16:44 -- scripts/common.sh@343 -- # case "$op" in 00:07:46.432 18:16:44 -- scripts/common.sh@344 -- # : 1 00:07:46.432 18:16:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:46.432 18:16:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.432 18:16:44 -- scripts/common.sh@364 -- # decimal 1 00:07:46.432 18:16:44 -- scripts/common.sh@352 -- # local d=1 00:07:46.432 18:16:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.432 18:16:44 -- scripts/common.sh@354 -- # echo 1 00:07:46.432 18:16:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:46.432 18:16:44 -- scripts/common.sh@365 -- # decimal 2 00:07:46.432 18:16:44 -- scripts/common.sh@352 -- # local d=2 00:07:46.432 18:16:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.432 18:16:44 -- scripts/common.sh@354 -- # echo 2 00:07:46.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.432 18:16:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:46.432 18:16:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:46.432 18:16:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:46.432 18:16:44 -- scripts/common.sh@367 -- # return 0 00:07:46.432 18:16:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.432 18:16:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.432 --rc genhtml_branch_coverage=1 00:07:46.432 --rc genhtml_function_coverage=1 00:07:46.432 --rc genhtml_legend=1 00:07:46.432 --rc geninfo_all_blocks=1 00:07:46.432 --rc geninfo_unexecuted_blocks=1 00:07:46.432 00:07:46.432 ' 00:07:46.432 18:16:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.432 --rc genhtml_branch_coverage=1 00:07:46.432 --rc genhtml_function_coverage=1 00:07:46.432 --rc genhtml_legend=1 00:07:46.432 --rc geninfo_all_blocks=1 00:07:46.432 --rc geninfo_unexecuted_blocks=1 00:07:46.432 00:07:46.432 ' 00:07:46.432 18:16:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.432 --rc genhtml_branch_coverage=1 00:07:46.432 --rc genhtml_function_coverage=1 00:07:46.432 --rc genhtml_legend=1 00:07:46.432 --rc geninfo_all_blocks=1 00:07:46.432 --rc geninfo_unexecuted_blocks=1 00:07:46.432 00:07:46.432 ' 00:07:46.432 18:16:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.432 --rc genhtml_branch_coverage=1 00:07:46.432 --rc genhtml_function_coverage=1 00:07:46.432 --rc genhtml_legend=1 00:07:46.432 --rc geninfo_all_blocks=1 00:07:46.432 --rc geninfo_unexecuted_blocks=1 00:07:46.432 00:07:46.432 ' 00:07:46.432 18:16:44 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:46.432 18:16:44 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=68976 00:07:46.432 18:16:44 -- accel/accel_rpc.sh@15 -- # waitforlisten 68976 00:07:46.432 18:16:44 -- common/autotest_common.sh@829 -- # '[' -z 68976 ']' 00:07:46.432 18:16:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.432 18:16:44 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:46.432 18:16:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.432 18:16:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:46.432 18:16:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.432 18:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.432 [2024-11-17 18:16:44.567754] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:46.432 [2024-11-17 18:16:44.568058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68976 ] 00:07:46.692 [2024-11-17 18:16:44.705012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.692 [2024-11-17 18:16:44.737129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:46.692 [2024-11-17 18:16:44.737574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.692 18:16:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.692 18:16:44 -- common/autotest_common.sh@862 -- # return 0 00:07:46.692 18:16:44 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:46.692 18:16:44 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:46.692 18:16:44 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:46.692 18:16:44 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:46.692 18:16:44 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:46.692 18:16:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.692 18:16:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.692 18:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.692 ************************************ 00:07:46.692 START TEST accel_assign_opcode 00:07:46.692 ************************************ 00:07:46.692 18:16:44 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:46.692 18:16:44 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:46.692 18:16:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.692 18:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.692 [2024-11-17 18:16:44.834083] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:46.692 18:16:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.692 18:16:44 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:46.692 18:16:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.692 18:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.692 [2024-11-17 18:16:44.842080] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:46.692 18:16:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.692 18:16:44 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:46.692 18:16:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.692 18:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.951 18:16:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.951 18:16:44 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:46.951 18:16:44 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:46.951 18:16:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.951 18:16:44 -- accel/accel_rpc.sh@42 -- # grep software 00:07:46.951 18:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.951 18:16:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.951 software 00:07:46.951 
************************************ 00:07:46.951 END TEST accel_assign_opcode 00:07:46.951 ************************************ 00:07:46.951 00:07:46.951 real 0m0.186s 00:07:46.951 user 0m0.059s 00:07:46.951 sys 0m0.009s 00:07:46.951 18:16:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.951 18:16:45 -- common/autotest_common.sh@10 -- # set +x 00:07:46.951 18:16:45 -- accel/accel_rpc.sh@55 -- # killprocess 68976 00:07:46.951 18:16:45 -- common/autotest_common.sh@936 -- # '[' -z 68976 ']' 00:07:46.951 18:16:45 -- common/autotest_common.sh@940 -- # kill -0 68976 00:07:46.951 18:16:45 -- common/autotest_common.sh@941 -- # uname 00:07:46.951 18:16:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:46.951 18:16:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68976 00:07:46.951 killing process with pid 68976 00:07:46.951 18:16:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:46.951 18:16:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:46.951 18:16:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68976' 00:07:46.951 18:16:45 -- common/autotest_common.sh@955 -- # kill 68976 00:07:46.951 18:16:45 -- common/autotest_common.sh@960 -- # wait 68976 00:07:47.211 00:07:47.211 real 0m0.974s 00:07:47.211 user 0m0.990s 00:07:47.211 sys 0m0.313s 00:07:47.211 18:16:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.211 18:16:45 -- common/autotest_common.sh@10 -- # set +x 00:07:47.211 ************************************ 00:07:47.211 END TEST accel_rpc 00:07:47.211 ************************************ 00:07:47.211 18:16:45 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:47.211 18:16:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.211 18:16:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.211 18:16:45 -- common/autotest_common.sh@10 -- # set +x 00:07:47.211 ************************************ 00:07:47.211 START TEST app_cmdline 00:07:47.211 ************************************ 00:07:47.211 18:16:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:47.211 * Looking for test storage... 
00:07:47.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:47.211 18:16:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:47.211 18:16:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:47.211 18:16:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:47.471 18:16:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:47.471 18:16:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:47.471 18:16:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:47.471 18:16:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:47.471 18:16:45 -- scripts/common.sh@335 -- # IFS=.-: 00:07:47.471 18:16:45 -- scripts/common.sh@335 -- # read -ra ver1 00:07:47.471 18:16:45 -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.471 18:16:45 -- scripts/common.sh@336 -- # read -ra ver2 00:07:47.471 18:16:45 -- scripts/common.sh@337 -- # local 'op=<' 00:07:47.471 18:16:45 -- scripts/common.sh@339 -- # ver1_l=2 00:07:47.471 18:16:45 -- scripts/common.sh@340 -- # ver2_l=1 00:07:47.471 18:16:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:47.471 18:16:45 -- scripts/common.sh@343 -- # case "$op" in 00:07:47.471 18:16:45 -- scripts/common.sh@344 -- # : 1 00:07:47.471 18:16:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:47.471 18:16:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.471 18:16:45 -- scripts/common.sh@364 -- # decimal 1 00:07:47.471 18:16:45 -- scripts/common.sh@352 -- # local d=1 00:07:47.471 18:16:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.471 18:16:45 -- scripts/common.sh@354 -- # echo 1 00:07:47.471 18:16:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:47.471 18:16:45 -- scripts/common.sh@365 -- # decimal 2 00:07:47.471 18:16:45 -- scripts/common.sh@352 -- # local d=2 00:07:47.471 18:16:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.471 18:16:45 -- scripts/common.sh@354 -- # echo 2 00:07:47.471 18:16:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:47.471 18:16:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:47.471 18:16:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:47.471 18:16:45 -- scripts/common.sh@367 -- # return 0 00:07:47.471 18:16:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.471 18:16:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:47.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.471 --rc genhtml_branch_coverage=1 00:07:47.471 --rc genhtml_function_coverage=1 00:07:47.471 --rc genhtml_legend=1 00:07:47.471 --rc geninfo_all_blocks=1 00:07:47.471 --rc geninfo_unexecuted_blocks=1 00:07:47.471 00:07:47.471 ' 00:07:47.471 18:16:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:47.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.471 --rc genhtml_branch_coverage=1 00:07:47.471 --rc genhtml_function_coverage=1 00:07:47.471 --rc genhtml_legend=1 00:07:47.471 --rc geninfo_all_blocks=1 00:07:47.471 --rc geninfo_unexecuted_blocks=1 00:07:47.471 00:07:47.471 ' 00:07:47.471 18:16:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:47.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.471 --rc genhtml_branch_coverage=1 00:07:47.471 --rc genhtml_function_coverage=1 00:07:47.471 --rc genhtml_legend=1 00:07:47.471 --rc geninfo_all_blocks=1 00:07:47.471 --rc geninfo_unexecuted_blocks=1 00:07:47.471 00:07:47.471 ' 00:07:47.471 18:16:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:47.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.471 --rc genhtml_branch_coverage=1 00:07:47.471 --rc genhtml_function_coverage=1 00:07:47.471 --rc genhtml_legend=1 00:07:47.471 --rc geninfo_all_blocks=1 00:07:47.471 --rc geninfo_unexecuted_blocks=1 00:07:47.471 00:07:47.471 ' 00:07:47.471 18:16:45 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:47.471 18:16:45 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69063 00:07:47.471 18:16:45 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:47.471 18:16:45 -- app/cmdline.sh@18 -- # waitforlisten 69063 00:07:47.471 18:16:45 -- common/autotest_common.sh@829 -- # '[' -z 69063 ']' 00:07:47.471 18:16:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.471 18:16:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.471 18:16:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.471 18:16:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.471 18:16:45 -- common/autotest_common.sh@10 -- # set +x 00:07:47.471 [2024-11-17 18:16:45.610239] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:47.471 [2024-11-17 18:16:45.610682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69063 ] 00:07:47.730 [2024-11-17 18:16:45.756581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.731 [2024-11-17 18:16:45.788879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:47.731 [2024-11-17 18:16:45.789255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.668 18:16:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.668 18:16:46 -- common/autotest_common.sh@862 -- # return 0 00:07:48.668 18:16:46 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:48.668 { 00:07:48.668 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:48.668 "fields": { 00:07:48.668 "major": 24, 00:07:48.668 "minor": 1, 00:07:48.668 "patch": 1, 00:07:48.668 "suffix": "-pre", 00:07:48.668 "commit": "c13c99a5e" 00:07:48.668 } 00:07:48.668 } 00:07:48.668 18:16:46 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:48.668 18:16:46 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:48.668 18:16:46 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:48.668 18:16:46 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:48.668 18:16:46 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:48.668 18:16:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.668 18:16:46 -- common/autotest_common.sh@10 -- # set +x 00:07:48.668 18:16:46 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:48.668 18:16:46 -- app/cmdline.sh@26 -- # sort 00:07:48.668 18:16:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.928 18:16:46 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:48.928 18:16:46 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:48.928 18:16:46 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:48.928 18:16:46 -- common/autotest_common.sh@650 -- # local es=0 00:07:48.928 18:16:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:48.928 18:16:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.928 18:16:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.928 18:16:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.928 18:16:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.928 18:16:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.928 18:16:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.928 18:16:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.928 18:16:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:48.928 18:16:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:48.928 request: 00:07:48.928 { 00:07:48.928 "method": "env_dpdk_get_mem_stats", 00:07:48.928 "req_id": 1 00:07:48.928 } 00:07:48.928 Got JSON-RPC error response 00:07:48.928 response: 00:07:48.928 { 00:07:48.928 "code": -32601, 00:07:48.928 "message": "Method not found" 00:07:48.928 } 00:07:49.187 18:16:47 -- common/autotest_common.sh@653 -- # es=1 00:07:49.187 18:16:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.187 18:16:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.187 18:16:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.187 18:16:47 -- app/cmdline.sh@1 -- # killprocess 69063 00:07:49.187 18:16:47 -- common/autotest_common.sh@936 -- # '[' -z 69063 ']' 00:07:49.187 18:16:47 -- common/autotest_common.sh@940 -- # kill -0 69063 00:07:49.187 18:16:47 -- common/autotest_common.sh@941 -- # uname 00:07:49.187 18:16:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:49.187 18:16:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69063 00:07:49.187 killing process with pid 69063 00:07:49.187 18:16:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:49.187 18:16:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:49.187 18:16:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69063' 00:07:49.187 18:16:47 -- common/autotest_common.sh@955 -- # kill 69063 00:07:49.187 18:16:47 -- common/autotest_common.sh@960 -- # wait 69063 00:07:49.187 00:07:49.187 real 0m2.091s 00:07:49.187 user 0m2.777s 00:07:49.187 sys 0m0.372s 00:07:49.187 18:16:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.187 18:16:47 -- common/autotest_common.sh@10 -- # set +x 00:07:49.187 ************************************ 00:07:49.187 END TEST app_cmdline 00:07:49.187 ************************************ 00:07:49.447 18:16:47 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:49.447 18:16:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.447 18:16:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.447 18:16:47 -- common/autotest_common.sh@10 -- # set +x 00:07:49.447 
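The app_cmdline run above exercises the target's RPC allow-list: spdk_get_version and rpc_get_methods are served, while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 ("Method not found"). A minimal hand-run sketch of the same check is shown below, using the spdk_tgt and rpc.py paths that appear in this log and the default /var/tmp/spdk.sock socket; the one-second sleep and the jq .version filter are illustrative stand-ins (the test itself waits via waitforlisten and filters the method list with jq -r '.[]').

# Sketch only: start spdk_tgt with a restricted RPC allow-list, then probe it.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
tgt_pid=$!
sleep 1                                                                            # stand-in for waitforlisten on /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version | jq -r .version      # allowed: "SPDK v24.01.1-pre git sha1 c13c99a5e"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # allowed: rpc_get_methods, spdk_get_version
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats                 # blocked: Method not found (-32601)
kill "$tgt_pid"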
************************************ 00:07:49.447 START TEST version 00:07:49.447 ************************************ 00:07:49.447 18:16:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:49.447 * Looking for test storage... 00:07:49.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:49.447 18:16:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.447 18:16:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.447 18:16:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.447 18:16:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.447 18:16:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.447 18:16:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.447 18:16:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.447 18:16:47 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.447 18:16:47 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.447 18:16:47 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.447 18:16:47 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.447 18:16:47 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.447 18:16:47 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.447 18:16:47 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.447 18:16:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.447 18:16:47 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.447 18:16:47 -- scripts/common.sh@344 -- # : 1 00:07:49.447 18:16:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.447 18:16:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.447 18:16:47 -- scripts/common.sh@364 -- # decimal 1 00:07:49.447 18:16:47 -- scripts/common.sh@352 -- # local d=1 00:07:49.447 18:16:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.447 18:16:47 -- scripts/common.sh@354 -- # echo 1 00:07:49.447 18:16:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.447 18:16:47 -- scripts/common.sh@365 -- # decimal 2 00:07:49.447 18:16:47 -- scripts/common.sh@352 -- # local d=2 00:07:49.447 18:16:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.447 18:16:47 -- scripts/common.sh@354 -- # echo 2 00:07:49.447 18:16:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.447 18:16:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.447 18:16:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.447 18:16:47 -- scripts/common.sh@367 -- # return 0 00:07:49.447 18:16:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.447 18:16:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.447 --rc genhtml_branch_coverage=1 00:07:49.447 --rc genhtml_function_coverage=1 00:07:49.447 --rc genhtml_legend=1 00:07:49.447 --rc geninfo_all_blocks=1 00:07:49.447 --rc geninfo_unexecuted_blocks=1 00:07:49.447 00:07:49.447 ' 00:07:49.447 18:16:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.447 --rc genhtml_branch_coverage=1 00:07:49.447 --rc genhtml_function_coverage=1 00:07:49.447 --rc genhtml_legend=1 00:07:49.447 --rc geninfo_all_blocks=1 00:07:49.447 --rc geninfo_unexecuted_blocks=1 00:07:49.447 00:07:49.447 ' 00:07:49.447 18:16:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.447 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:49.447 --rc genhtml_branch_coverage=1 00:07:49.447 --rc genhtml_function_coverage=1 00:07:49.447 --rc genhtml_legend=1 00:07:49.447 --rc geninfo_all_blocks=1 00:07:49.447 --rc geninfo_unexecuted_blocks=1 00:07:49.447 00:07:49.447 ' 00:07:49.447 18:16:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.447 --rc genhtml_branch_coverage=1 00:07:49.447 --rc genhtml_function_coverage=1 00:07:49.447 --rc genhtml_legend=1 00:07:49.447 --rc geninfo_all_blocks=1 00:07:49.447 --rc geninfo_unexecuted_blocks=1 00:07:49.447 00:07:49.447 ' 00:07:49.447 18:16:47 -- app/version.sh@17 -- # get_header_version major 00:07:49.447 18:16:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.447 18:16:47 -- app/version.sh@14 -- # cut -f2 00:07:49.447 18:16:47 -- app/version.sh@14 -- # tr -d '"' 00:07:49.447 18:16:47 -- app/version.sh@17 -- # major=24 00:07:49.447 18:16:47 -- app/version.sh@18 -- # get_header_version minor 00:07:49.447 18:16:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.447 18:16:47 -- app/version.sh@14 -- # tr -d '"' 00:07:49.447 18:16:47 -- app/version.sh@14 -- # cut -f2 00:07:49.447 18:16:47 -- app/version.sh@18 -- # minor=1 00:07:49.447 18:16:47 -- app/version.sh@19 -- # get_header_version patch 00:07:49.447 18:16:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.447 18:16:47 -- app/version.sh@14 -- # cut -f2 00:07:49.447 18:16:47 -- app/version.sh@14 -- # tr -d '"' 00:07:49.447 18:16:47 -- app/version.sh@19 -- # patch=1 00:07:49.447 18:16:47 -- app/version.sh@20 -- # get_header_version suffix 00:07:49.447 18:16:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.447 18:16:47 -- app/version.sh@14 -- # cut -f2 00:07:49.447 18:16:47 -- app/version.sh@14 -- # tr -d '"' 00:07:49.447 18:16:47 -- app/version.sh@20 -- # suffix=-pre 00:07:49.447 18:16:47 -- app/version.sh@22 -- # version=24.1 00:07:49.447 18:16:47 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:49.447 18:16:47 -- app/version.sh@25 -- # version=24.1.1 00:07:49.447 18:16:47 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:49.448 18:16:47 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:49.448 18:16:47 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:49.707 18:16:47 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:49.707 18:16:47 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:49.707 00:07:49.707 real 0m0.224s 00:07:49.707 user 0m0.133s 00:07:49.707 sys 0m0.124s 00:07:49.707 ************************************ 00:07:49.707 END TEST version 00:07:49.707 ************************************ 00:07:49.707 18:16:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.707 18:16:47 -- common/autotest_common.sh@10 -- # set +x 00:07:49.707 18:16:47 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:49.707 18:16:47 -- spdk/autotest.sh@191 -- # uname -s 00:07:49.707 18:16:47 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
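The version test that just finished assembles 24.1.1rc0 entirely from include/spdk/version.h: each component is extracted with the grep/cut/tr pipeline visible in the trace, and the result is compared against python3 -c 'import spdk; print(spdk.__version__)'. A condensed sketch of that extraction follows; the header path is the one used in this run, while the final -pre to rc0 step is paraphrased from the trace rather than copied from version.sh.

# Sketch of the get_header_version pipeline from test/app/version.sh.
hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
version="${major}.${minor}"
(( patch != 0 )) && version="${version}.${patch}"
[[ $suffix == -pre ]] && version="${version}rc0"   # assumed condition; the trace shows 24.1.1 becoming 24.1.1rc0
echo "$version"                                    # 24.1.1rc0 for this build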
00:07:49.707 18:16:47 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:49.707 18:16:47 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:07:49.707 18:16:47 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:07:49.707 18:16:47 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:49.707 18:16:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.707 18:16:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.707 18:16:47 -- common/autotest_common.sh@10 -- # set +x 00:07:49.707 ************************************ 00:07:49.707 START TEST spdk_dd 00:07:49.707 ************************************ 00:07:49.707 18:16:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:49.707 * Looking for test storage... 00:07:49.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:49.707 18:16:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.707 18:16:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.707 18:16:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.707 18:16:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.708 18:16:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.708 18:16:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.708 18:16:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.708 18:16:47 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.708 18:16:47 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.708 18:16:47 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.708 18:16:47 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.708 18:16:47 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.708 18:16:47 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.708 18:16:47 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.708 18:16:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.708 18:16:47 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.708 18:16:47 -- scripts/common.sh@344 -- # : 1 00:07:49.708 18:16:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.708 18:16:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.708 18:16:47 -- scripts/common.sh@364 -- # decimal 1 00:07:49.708 18:16:47 -- scripts/common.sh@352 -- # local d=1 00:07:49.708 18:16:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.708 18:16:47 -- scripts/common.sh@354 -- # echo 1 00:07:49.708 18:16:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.708 18:16:47 -- scripts/common.sh@365 -- # decimal 2 00:07:49.708 18:16:47 -- scripts/common.sh@352 -- # local d=2 00:07:49.708 18:16:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.708 18:16:47 -- scripts/common.sh@354 -- # echo 2 00:07:49.708 18:16:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.708 18:16:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.708 18:16:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.708 18:16:47 -- scripts/common.sh@367 -- # return 0 00:07:49.708 18:16:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.708 18:16:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.708 --rc genhtml_branch_coverage=1 00:07:49.708 --rc genhtml_function_coverage=1 00:07:49.708 --rc genhtml_legend=1 00:07:49.708 --rc geninfo_all_blocks=1 00:07:49.708 --rc geninfo_unexecuted_blocks=1 00:07:49.708 00:07:49.708 ' 00:07:49.708 18:16:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.708 --rc genhtml_branch_coverage=1 00:07:49.708 --rc genhtml_function_coverage=1 00:07:49.708 --rc genhtml_legend=1 00:07:49.708 --rc geninfo_all_blocks=1 00:07:49.708 --rc geninfo_unexecuted_blocks=1 00:07:49.708 00:07:49.708 ' 00:07:49.708 18:16:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.708 --rc genhtml_branch_coverage=1 00:07:49.708 --rc genhtml_function_coverage=1 00:07:49.708 --rc genhtml_legend=1 00:07:49.708 --rc geninfo_all_blocks=1 00:07:49.708 --rc geninfo_unexecuted_blocks=1 00:07:49.708 00:07:49.708 ' 00:07:49.708 18:16:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.708 --rc genhtml_branch_coverage=1 00:07:49.708 --rc genhtml_function_coverage=1 00:07:49.708 --rc genhtml_legend=1 00:07:49.708 --rc geninfo_all_blocks=1 00:07:49.708 --rc geninfo_unexecuted_blocks=1 00:07:49.708 00:07:49.708 ' 00:07:49.708 18:16:47 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.708 18:16:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.708 18:16:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.708 18:16:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.708 18:16:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.708 18:16:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.708 18:16:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.708 18:16:47 -- paths/export.sh@5 -- # export PATH 00:07:49.708 18:16:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.708 18:16:47 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:50.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:50.279 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:50.279 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:50.279 18:16:48 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:50.279 18:16:48 -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:50.279 18:16:48 -- scripts/common.sh@311 -- # local bdf bdfs 00:07:50.279 18:16:48 -- scripts/common.sh@312 -- # local nvmes 00:07:50.279 18:16:48 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:07:50.279 18:16:48 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:50.279 18:16:48 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:07:50.279 18:16:48 -- scripts/common.sh@297 -- # local bdf= 00:07:50.279 18:16:48 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:07:50.279 18:16:48 -- scripts/common.sh@232 -- # local class 00:07:50.279 18:16:48 -- scripts/common.sh@233 -- # local subclass 00:07:50.279 18:16:48 -- scripts/common.sh@234 -- # local progif 00:07:50.279 18:16:48 -- scripts/common.sh@235 -- # printf %02x 1 00:07:50.279 18:16:48 -- scripts/common.sh@235 -- # class=01 00:07:50.279 18:16:48 -- scripts/common.sh@236 -- # printf %02x 8 00:07:50.279 18:16:48 -- scripts/common.sh@236 -- # subclass=08 00:07:50.279 18:16:48 -- scripts/common.sh@237 -- # printf %02x 2 00:07:50.279 18:16:48 -- scripts/common.sh@237 -- # progif=02 00:07:50.279 18:16:48 -- scripts/common.sh@239 -- # hash lspci 00:07:50.279 18:16:48 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:07:50.279 18:16:48 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:07:50.279 18:16:48 -- scripts/common.sh@242 -- # grep -i -- -p02 00:07:50.279 18:16:48 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:50.279 18:16:48 -- scripts/common.sh@244 -- # tr -d '"' 00:07:50.279 18:16:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:50.279 18:16:48 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:07:50.279 18:16:48 -- scripts/common.sh@15 -- # local i 00:07:50.279 18:16:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:07:50.279 18:16:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:50.279 18:16:48 -- scripts/common.sh@24 -- # return 0 00:07:50.279 18:16:48 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:07:50.279 18:16:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:50.279 18:16:48 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:07:50.279 18:16:48 -- scripts/common.sh@15 -- # local i 00:07:50.279 18:16:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:07:50.279 18:16:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:50.279 18:16:48 -- scripts/common.sh@24 -- # return 0 00:07:50.279 18:16:48 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:07:50.279 18:16:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:50.279 18:16:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:07:50.279 18:16:48 -- scripts/common.sh@322 -- # uname -s 00:07:50.279 18:16:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:50.279 18:16:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:50.279 18:16:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:50.280 18:16:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:07:50.280 18:16:48 -- scripts/common.sh@322 -- # uname -s 00:07:50.280 18:16:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:50.280 18:16:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:50.280 18:16:48 -- scripts/common.sh@327 -- # (( 2 )) 00:07:50.280 18:16:48 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:50.280 18:16:48 -- dd/dd.sh@13 -- # check_liburing 00:07:50.280 18:16:48 -- dd/common.sh@139 -- # local lib so 00:07:50.280 18:16:48 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:50.280 18:16:48 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:07:50.280 
18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.280 18:16:48 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:50.280 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == 
liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.281 18:16:48 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:50.281 18:16:48 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:50.281 * spdk_dd linked to liburing 00:07:50.281 18:16:48 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:50.281 18:16:48 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:50.281 18:16:48 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:50.281 18:16:48 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:50.281 18:16:48 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:50.281 18:16:48 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:50.281 18:16:48 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:50.281 18:16:48 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:50.281 18:16:48 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:50.281 18:16:48 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:50.281 18:16:48 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:50.281 18:16:48 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:50.281 18:16:48 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:50.281 18:16:48 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:50.281 18:16:48 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:50.281 18:16:48 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:50.281 18:16:48 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:50.281 18:16:48 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:50.281 18:16:48 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:50.281 18:16:48 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:50.281 18:16:48 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:50.281 18:16:48 -- common/build_config.sh@20 -- # 
CONFIG_LTO=n 00:07:50.281 18:16:48 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:50.281 18:16:48 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:50.281 18:16:48 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:50.281 18:16:48 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:50.281 18:16:48 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:50.281 18:16:48 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:50.281 18:16:48 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:50.281 18:16:48 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:50.281 18:16:48 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:50.281 18:16:48 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:50.281 18:16:48 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:50.281 18:16:48 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:50.281 18:16:48 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:50.281 18:16:48 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:50.281 18:16:48 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:50.281 18:16:48 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:50.281 18:16:48 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:50.281 18:16:48 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:50.281 18:16:48 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:50.281 18:16:48 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:50.281 18:16:48 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:50.281 18:16:48 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:50.281 18:16:48 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:50.281 18:16:48 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:50.281 18:16:48 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:50.281 18:16:48 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:50.281 18:16:48 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:50.281 18:16:48 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:50.281 18:16:48 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:50.281 18:16:48 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:50.281 18:16:48 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:50.281 18:16:48 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:50.281 18:16:48 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:07:50.281 18:16:48 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:50.281 18:16:48 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:50.281 18:16:48 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:50.281 18:16:48 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:50.281 18:16:48 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:50.281 18:16:48 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:50.281 18:16:48 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:50.281 18:16:48 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:50.281 18:16:48 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:50.281 18:16:48 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:50.281 18:16:48 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:50.281 18:16:48 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:50.281 18:16:48 -- common/build_config.sh@66 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:07:50.281 18:16:48 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:50.281 18:16:48 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:50.281 18:16:48 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:50.281 18:16:48 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:50.281 18:16:48 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:50.281 18:16:48 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:50.281 18:16:48 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:50.281 18:16:48 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:50.281 18:16:48 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:50.281 18:16:48 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:50.281 18:16:48 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:50.281 18:16:48 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:50.281 18:16:48 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:07:50.281 18:16:48 -- dd/common.sh@149 -- # [[ y != y ]] 00:07:50.281 18:16:48 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:50.281 18:16:48 -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:50.281 18:16:48 -- dd/common.sh@156 -- # liburing_in_use=1 00:07:50.281 18:16:48 -- dd/common.sh@157 -- # return 0 00:07:50.281 18:16:48 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:50.281 18:16:48 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:50.281 18:16:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:50.281 18:16:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.281 18:16:48 -- common/autotest_common.sh@10 -- # set +x 00:07:50.281 ************************************ 00:07:50.281 START TEST spdk_dd_basic_rw 00:07:50.281 ************************************ 00:07:50.281 18:16:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:50.281 * Looking for test storage... 00:07:50.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:50.542 18:16:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:50.542 18:16:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:50.542 18:16:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:50.542 18:16:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:50.542 18:16:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:50.542 18:16:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:50.542 18:16:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:50.542 18:16:48 -- scripts/common.sh@335 -- # IFS=.-: 00:07:50.542 18:16:48 -- scripts/common.sh@335 -- # read -ra ver1 00:07:50.542 18:16:48 -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.542 18:16:48 -- scripts/common.sh@336 -- # read -ra ver2 00:07:50.542 18:16:48 -- scripts/common.sh@337 -- # local 'op=<' 00:07:50.542 18:16:48 -- scripts/common.sh@339 -- # ver1_l=2 00:07:50.542 18:16:48 -- scripts/common.sh@340 -- # ver2_l=1 00:07:50.542 18:16:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:50.542 18:16:48 -- scripts/common.sh@343 -- # case "$op" in 00:07:50.542 18:16:48 -- scripts/common.sh@344 -- # : 1 00:07:50.542 18:16:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:50.542 18:16:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.542 18:16:48 -- scripts/common.sh@364 -- # decimal 1 00:07:50.542 18:16:48 -- scripts/common.sh@352 -- # local d=1 00:07:50.542 18:16:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.542 18:16:48 -- scripts/common.sh@354 -- # echo 1 00:07:50.542 18:16:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:50.542 18:16:48 -- scripts/common.sh@365 -- # decimal 2 00:07:50.542 18:16:48 -- scripts/common.sh@352 -- # local d=2 00:07:50.542 18:16:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.542 18:16:48 -- scripts/common.sh@354 -- # echo 2 00:07:50.542 18:16:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:50.542 18:16:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:50.542 18:16:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:50.542 18:16:48 -- scripts/common.sh@367 -- # return 0 00:07:50.542 18:16:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.542 18:16:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:50.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.542 --rc genhtml_branch_coverage=1 00:07:50.542 --rc genhtml_function_coverage=1 00:07:50.542 --rc genhtml_legend=1 00:07:50.542 --rc geninfo_all_blocks=1 00:07:50.542 --rc geninfo_unexecuted_blocks=1 00:07:50.542 00:07:50.542 ' 00:07:50.542 18:16:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:50.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.542 --rc genhtml_branch_coverage=1 00:07:50.542 --rc genhtml_function_coverage=1 00:07:50.542 --rc genhtml_legend=1 00:07:50.542 --rc geninfo_all_blocks=1 00:07:50.542 --rc geninfo_unexecuted_blocks=1 00:07:50.542 00:07:50.542 ' 00:07:50.542 18:16:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:50.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.542 --rc genhtml_branch_coverage=1 00:07:50.542 --rc genhtml_function_coverage=1 00:07:50.542 --rc genhtml_legend=1 00:07:50.542 --rc geninfo_all_blocks=1 00:07:50.542 --rc geninfo_unexecuted_blocks=1 00:07:50.542 00:07:50.542 ' 00:07:50.542 18:16:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:50.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.542 --rc genhtml_branch_coverage=1 00:07:50.542 --rc genhtml_function_coverage=1 00:07:50.542 --rc genhtml_legend=1 00:07:50.542 --rc geninfo_all_blocks=1 00:07:50.542 --rc geninfo_unexecuted_blocks=1 00:07:50.542 00:07:50.542 ' 00:07:50.542 18:16:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.542 18:16:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.542 18:16:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.542 18:16:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.542 18:16:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.542 18:16:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.542 18:16:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.542 18:16:48 -- paths/export.sh@5 -- # export PATH 00:07:50.542 18:16:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.542 18:16:48 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:50.542 18:16:48 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:50.542 18:16:48 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:50.542 18:16:48 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:07:50.542 18:16:48 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:50.542 18:16:48 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:50.542 18:16:48 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:50.542 18:16:48 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.542 18:16:48 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.542 18:16:48 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:07:50.542 18:16:48 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:07:50.542 18:16:48 -- dd/common.sh@126 -- # mapfile -t id 00:07:50.542 18:16:48 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:07:50.804 18:16:48 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe 
Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 
Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 95 Data Units Written: 9 Host Read Commands: 2184 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA 
Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:50.804 18:16:48 -- dd/common.sh@130 -- # lbaf=04 00:07:50.805 18:16:48 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple 
Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 95 
Data Units Written: 9 Host Read Commands: 2184 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:50.805 18:16:48 -- dd/common.sh@132 -- # lbaf=4096 00:07:50.805 18:16:48 -- dd/common.sh@134 -- # echo 4096 00:07:50.805 18:16:48 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:50.805 18:16:48 -- dd/basic_rw.sh@96 -- # : 00:07:50.805 18:16:48 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:50.805 18:16:48 -- dd/basic_rw.sh@96 -- # gen_conf 00:07:50.805 18:16:48 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:50.805 18:16:48 -- dd/common.sh@31 -- # xtrace_disable 00:07:50.805 18:16:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.805 18:16:48 -- common/autotest_common.sh@10 -- # set +x 00:07:50.805 18:16:48 -- common/autotest_common.sh@10 -- # set +x 00:07:50.805 ************************************ 00:07:50.805 START TEST dd_bs_lt_native_bs 00:07:50.805 ************************************ 00:07:50.805 18:16:48 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:50.805 18:16:48 -- common/autotest_common.sh@650 -- # local es=0 00:07:50.805 18:16:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:50.805 18:16:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.805 18:16:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.805 18:16:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.805 18:16:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.805 18:16:48 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.805 18:16:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.805 18:16:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.805 18:16:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.805 18:16:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:50.805 { 00:07:50.805 "subsystems": [ 00:07:50.805 { 00:07:50.805 "subsystem": "bdev", 00:07:50.805 "config": [ 00:07:50.805 { 00:07:50.805 "params": { 00:07:50.805 "trtype": "pcie", 00:07:50.805 "traddr": "0000:00:06.0", 00:07:50.805 "name": "Nvme0" 00:07:50.805 }, 00:07:50.805 "method": "bdev_nvme_attach_controller" 00:07:50.805 }, 00:07:50.805 { 00:07:50.805 "method": "bdev_wait_for_examine" 00:07:50.805 } 00:07:50.805 ] 00:07:50.805 } 00:07:50.805 ] 00:07:50.805 } 00:07:50.805 [2024-11-17 18:16:48.911791] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:50.805 [2024-11-17 18:16:48.912255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69408 ] 00:07:50.805 [2024-11-17 18:16:49.052624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.105 [2024-11-17 18:16:49.091942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.105 [2024-11-17 18:16:49.208026] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:51.105 [2024-11-17 18:16:49.208096] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.105 [2024-11-17 18:16:49.277691] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:51.383 18:16:49 -- common/autotest_common.sh@653 -- # es=234 00:07:51.383 18:16:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.383 ************************************ 00:07:51.383 END TEST dd_bs_lt_native_bs 00:07:51.383 ************************************ 00:07:51.383 18:16:49 -- common/autotest_common.sh@662 -- # es=106 00:07:51.383 18:16:49 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:51.383 18:16:49 -- common/autotest_common.sh@670 -- # es=1 00:07:51.383 18:16:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.383 00:07:51.383 real 0m0.489s 00:07:51.383 user 0m0.327s 00:07:51.383 sys 0m0.120s 00:07:51.383 18:16:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.383 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:07:51.383 18:16:49 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:51.383 18:16:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:51.383 18:16:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.383 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:07:51.383 ************************************ 00:07:51.383 START TEST dd_rw 00:07:51.383 ************************************ 00:07:51.383 18:16:49 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:07:51.383 18:16:49 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:51.383 18:16:49 -- dd/basic_rw.sh@12 -- # local count size 00:07:51.383 18:16:49 -- dd/basic_rw.sh@13 -- # local qds bss 00:07:51.383 18:16:49 -- dd/basic_rw.sh@15 -- # qds=(1 64) 
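What just happened above: get_native_nvme_bs ran spdk_nvme_identify against 0000:00:06.0, captured the identify text, matched "Current LBA Format: LBA Format #04" and then "LBA Format #04: Data Size: 4096", so the native block size for Nvme0 is 4096 bytes. dd_bs_lt_native_bs then asked spdk_dd to write with --bs=2048 and counts as a pass only because the command fails: spdk_dd rejects a --bs below the output device's native block size, and the NOT wrapper in autotest_common.sh normalizes the non-zero exit status (234 -> 106 -> 1) into a successful negative test. A compact sketch of the two regex captures, assuming the identify output is held in $id as the trace shows:

# Hedged sketch of the extraction seen at dd/common.sh@129-134.
re1='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $re1 ]] && lbaf=${BASH_REMATCH[1]}          # -> 04
re2="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re2 ]] && native_bs=${BASH_REMATCH[1]}     # -> 4096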
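From here the dd_rw body sweeps three block sizes derived from that native 4096-byte block and two queue depths, as the basic_rw.sh trace shows; the setup amounts to:

# Same arithmetic as the basic_rw.sh trace above (sketch, not the full test body).
native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do
  bss+=($((native_bs << bs)))    # 4096, 8192, 16384 bytes
done
# every (bs, qd) pair gets its own write/read/verify pass; count is picked per
# block size (15 blocks at 4 KiB -> 61440 bytes, 7 blocks at 8 KiB -> 57344)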
00:07:51.383 18:16:49 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:51.383 18:16:49 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:51.383 18:16:49 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:51.383 18:16:49 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:51.383 18:16:49 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:51.383 18:16:49 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:51.383 18:16:49 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:51.383 18:16:49 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:51.383 18:16:49 -- dd/basic_rw.sh@23 -- # count=15 00:07:51.383 18:16:49 -- dd/basic_rw.sh@24 -- # count=15 00:07:51.383 18:16:49 -- dd/basic_rw.sh@25 -- # size=61440 00:07:51.383 18:16:49 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:51.383 18:16:49 -- dd/common.sh@98 -- # xtrace_disable 00:07:51.383 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:07:51.984 18:16:50 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:51.984 18:16:50 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:51.984 18:16:50 -- dd/common.sh@31 -- # xtrace_disable 00:07:51.984 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:07:51.984 [2024-11-17 18:16:50.092207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:51.984 [2024-11-17 18:16:50.092949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69443 ] 00:07:51.984 { 00:07:51.984 "subsystems": [ 00:07:51.984 { 00:07:51.984 "subsystem": "bdev", 00:07:51.984 "config": [ 00:07:51.984 { 00:07:51.984 "params": { 00:07:51.984 "trtype": "pcie", 00:07:51.984 "traddr": "0000:00:06.0", 00:07:51.984 "name": "Nvme0" 00:07:51.984 }, 00:07:51.984 "method": "bdev_nvme_attach_controller" 00:07:51.984 }, 00:07:51.984 { 00:07:51.984 "method": "bdev_wait_for_examine" 00:07:51.984 } 00:07:51.984 ] 00:07:51.984 } 00:07:51.984 ] 00:07:51.984 } 00:07:51.984 [2024-11-17 18:16:50.232215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.244 [2024-11-17 18:16:50.271418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.244  [2024-11-17T18:16:50.771Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:52.504 00:07:52.504 18:16:50 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:52.504 18:16:50 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:52.504 18:16:50 -- dd/common.sh@31 -- # xtrace_disable 00:07:52.504 18:16:50 -- common/autotest_common.sh@10 -- # set +x 00:07:52.504 [2024-11-17 18:16:50.586416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:52.504 [2024-11-17 18:16:50.586679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69456 ] 00:07:52.504 { 00:07:52.504 "subsystems": [ 00:07:52.504 { 00:07:52.504 "subsystem": "bdev", 00:07:52.504 "config": [ 00:07:52.504 { 00:07:52.504 "params": { 00:07:52.504 "trtype": "pcie", 00:07:52.504 "traddr": "0000:00:06.0", 00:07:52.504 "name": "Nvme0" 00:07:52.504 }, 00:07:52.504 "method": "bdev_nvme_attach_controller" 00:07:52.504 }, 00:07:52.504 { 00:07:52.504 "method": "bdev_wait_for_examine" 00:07:52.504 } 00:07:52.504 ] 00:07:52.504 } 00:07:52.504 ] 00:07:52.504 } 00:07:52.504 [2024-11-17 18:16:50.722951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.504 [2024-11-17 18:16:50.752218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.764  [2024-11-17T18:16:51.031Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:52.764 00:07:52.764 18:16:51 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.764 18:16:51 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:52.764 18:16:51 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:52.764 18:16:51 -- dd/common.sh@11 -- # local nvme_ref= 00:07:52.764 18:16:51 -- dd/common.sh@12 -- # local size=61440 00:07:52.764 18:16:51 -- dd/common.sh@14 -- # local bs=1048576 00:07:52.764 18:16:51 -- dd/common.sh@15 -- # local count=1 00:07:52.764 18:16:51 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:52.764 18:16:51 -- dd/common.sh@18 -- # gen_conf 00:07:52.764 18:16:51 -- dd/common.sh@31 -- # xtrace_disable 00:07:52.764 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:07:53.023 [2024-11-17 18:16:51.054525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
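Each (block size, queue depth) pass is a full round trip: spdk_dd writes dd.dump0 into the Nvme0n1 bdev, reads the same range back into dd.dump1, diff -q proves the two dumps are identical, and clear_nvme overwrites the first megabyte of the namespace with zeroes before the next pass. Stripped of the JSON plumbing, the bs=4096, qd=1 pass completing here is roughly the following; the real run hands the bdev JSON to spdk_dd over /dev/fd/62, and process substitution only stands in for that here:

# Hedged outline of one basic_rw pass (bs=4096, qd=1).
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1            --json <(gen_conf)  # write
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json <(gen_conf)  # read back
diff -q dd.dump0 dd.dump1                                                          # verify contents match
spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1     --json <(gen_conf)  # clear_nvme scrub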
00:07:53.023 [2024-11-17 18:16:51.054611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69468 ] 00:07:53.023 { 00:07:53.023 "subsystems": [ 00:07:53.023 { 00:07:53.023 "subsystem": "bdev", 00:07:53.023 "config": [ 00:07:53.023 { 00:07:53.023 "params": { 00:07:53.023 "trtype": "pcie", 00:07:53.023 "traddr": "0000:00:06.0", 00:07:53.023 "name": "Nvme0" 00:07:53.023 }, 00:07:53.023 "method": "bdev_nvme_attach_controller" 00:07:53.023 }, 00:07:53.023 { 00:07:53.023 "method": "bdev_wait_for_examine" 00:07:53.023 } 00:07:53.023 ] 00:07:53.023 } 00:07:53.023 ] 00:07:53.023 } 00:07:53.023 [2024-11-17 18:16:51.193598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.023 [2024-11-17 18:16:51.223023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.283  [2024-11-17T18:16:51.550Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:53.283 00:07:53.283 18:16:51 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:53.283 18:16:51 -- dd/basic_rw.sh@23 -- # count=15 00:07:53.283 18:16:51 -- dd/basic_rw.sh@24 -- # count=15 00:07:53.283 18:16:51 -- dd/basic_rw.sh@25 -- # size=61440 00:07:53.283 18:16:51 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:53.283 18:16:51 -- dd/common.sh@98 -- # xtrace_disable 00:07:53.283 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:07:53.851 18:16:52 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:53.851 18:16:52 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:53.851 18:16:52 -- dd/common.sh@31 -- # xtrace_disable 00:07:53.851 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:07:53.851 [2024-11-17 18:16:52.081244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:53.851 [2024-11-17 18:16:52.081505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69486 ] 00:07:53.851 { 00:07:53.851 "subsystems": [ 00:07:53.851 { 00:07:53.851 "subsystem": "bdev", 00:07:53.851 "config": [ 00:07:53.851 { 00:07:53.851 "params": { 00:07:53.851 "trtype": "pcie", 00:07:53.851 "traddr": "0000:00:06.0", 00:07:53.851 "name": "Nvme0" 00:07:53.851 }, 00:07:53.851 "method": "bdev_nvme_attach_controller" 00:07:53.851 }, 00:07:53.851 { 00:07:53.851 "method": "bdev_wait_for_examine" 00:07:53.851 } 00:07:53.851 ] 00:07:53.851 } 00:07:53.851 ] 00:07:53.851 } 00:07:54.110 [2024-11-17 18:16:52.210627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.110 [2024-11-17 18:16:52.240277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.110  [2024-11-17T18:16:52.636Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:54.369 00:07:54.369 18:16:52 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:54.369 18:16:52 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:54.369 18:16:52 -- dd/common.sh@31 -- # xtrace_disable 00:07:54.369 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:07:54.369 [2024-11-17 18:16:52.542334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:54.369 [2024-11-17 18:16:52.542445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69500 ] 00:07:54.369 { 00:07:54.369 "subsystems": [ 00:07:54.369 { 00:07:54.369 "subsystem": "bdev", 00:07:54.369 "config": [ 00:07:54.369 { 00:07:54.369 "params": { 00:07:54.369 "trtype": "pcie", 00:07:54.369 "traddr": "0000:00:06.0", 00:07:54.369 "name": "Nvme0" 00:07:54.369 }, 00:07:54.369 "method": "bdev_nvme_attach_controller" 00:07:54.369 }, 00:07:54.369 { 00:07:54.369 "method": "bdev_wait_for_examine" 00:07:54.369 } 00:07:54.369 ] 00:07:54.369 } 00:07:54.369 ] 00:07:54.369 } 00:07:54.628 [2024-11-17 18:16:52.681540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.628 [2024-11-17 18:16:52.719848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.628  [2024-11-17T18:16:53.155Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:54.888 00:07:54.888 18:16:52 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.888 18:16:52 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:54.888 18:16:52 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:54.888 18:16:52 -- dd/common.sh@11 -- # local nvme_ref= 00:07:54.888 18:16:52 -- dd/common.sh@12 -- # local size=61440 00:07:54.888 18:16:52 -- dd/common.sh@14 -- # local bs=1048576 00:07:54.888 18:16:52 -- dd/common.sh@15 -- # local count=1 00:07:54.888 18:16:52 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:54.888 18:16:52 -- dd/common.sh@18 -- # gen_conf 00:07:54.888 18:16:52 -- dd/common.sh@31 -- # xtrace_disable 00:07:54.888 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:07:54.888 [2024-11-17 
18:16:53.042440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:54.888 [2024-11-17 18:16:53.042530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69508 ] 00:07:54.888 { 00:07:54.888 "subsystems": [ 00:07:54.888 { 00:07:54.888 "subsystem": "bdev", 00:07:54.888 "config": [ 00:07:54.888 { 00:07:54.888 "params": { 00:07:54.888 "trtype": "pcie", 00:07:54.888 "traddr": "0000:00:06.0", 00:07:54.888 "name": "Nvme0" 00:07:54.888 }, 00:07:54.888 "method": "bdev_nvme_attach_controller" 00:07:54.888 }, 00:07:54.888 { 00:07:54.888 "method": "bdev_wait_for_examine" 00:07:54.888 } 00:07:54.888 ] 00:07:54.888 } 00:07:54.888 ] 00:07:54.888 } 00:07:55.147 [2024-11-17 18:16:53.178614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.147 [2024-11-17 18:16:53.207972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.147  [2024-11-17T18:16:53.673Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:55.406 00:07:55.406 18:16:53 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:55.406 18:16:53 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:55.406 18:16:53 -- dd/basic_rw.sh@23 -- # count=7 00:07:55.406 18:16:53 -- dd/basic_rw.sh@24 -- # count=7 00:07:55.406 18:16:53 -- dd/basic_rw.sh@25 -- # size=57344 00:07:55.406 18:16:53 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:55.406 18:16:53 -- dd/common.sh@98 -- # xtrace_disable 00:07:55.406 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:07:55.974 18:16:53 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:55.974 18:16:53 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:55.974 18:16:53 -- dd/common.sh@31 -- # xtrace_disable 00:07:55.974 18:16:53 -- common/autotest_common.sh@10 -- # set +x 00:07:55.974 [2024-11-17 18:16:54.040712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:55.974 [2024-11-17 18:16:54.041002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69526 ] 00:07:55.974 { 00:07:55.974 "subsystems": [ 00:07:55.974 { 00:07:55.974 "subsystem": "bdev", 00:07:55.974 "config": [ 00:07:55.974 { 00:07:55.974 "params": { 00:07:55.974 "trtype": "pcie", 00:07:55.974 "traddr": "0000:00:06.0", 00:07:55.974 "name": "Nvme0" 00:07:55.974 }, 00:07:55.974 "method": "bdev_nvme_attach_controller" 00:07:55.974 }, 00:07:55.974 { 00:07:55.974 "method": "bdev_wait_for_examine" 00:07:55.974 } 00:07:55.974 ] 00:07:55.974 } 00:07:55.974 ] 00:07:55.974 } 00:07:55.974 [2024-11-17 18:16:54.179037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.974 [2024-11-17 18:16:54.210807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.233  [2024-11-17T18:16:54.500Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:56.233 00:07:56.233 18:16:54 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:56.233 18:16:54 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:56.233 18:16:54 -- dd/common.sh@31 -- # xtrace_disable 00:07:56.233 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:07:56.492 [2024-11-17 18:16:54.528089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:56.492 [2024-11-17 18:16:54.528195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69544 ] 00:07:56.492 { 00:07:56.492 "subsystems": [ 00:07:56.492 { 00:07:56.492 "subsystem": "bdev", 00:07:56.492 "config": [ 00:07:56.492 { 00:07:56.492 "params": { 00:07:56.492 "trtype": "pcie", 00:07:56.492 "traddr": "0000:00:06.0", 00:07:56.492 "name": "Nvme0" 00:07:56.492 }, 00:07:56.492 "method": "bdev_nvme_attach_controller" 00:07:56.492 }, 00:07:56.492 { 00:07:56.492 "method": "bdev_wait_for_examine" 00:07:56.492 } 00:07:56.492 ] 00:07:56.492 } 00:07:56.492 ] 00:07:56.492 } 00:07:56.492 [2024-11-17 18:16:54.663038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.492 [2024-11-17 18:16:54.692570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.751  [2024-11-17T18:16:55.018Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:56.751 00:07:56.751 18:16:54 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.751 18:16:54 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:56.751 18:16:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:56.751 18:16:54 -- dd/common.sh@11 -- # local nvme_ref= 00:07:56.751 18:16:54 -- dd/common.sh@12 -- # local size=57344 00:07:56.751 18:16:54 -- dd/common.sh@14 -- # local bs=1048576 00:07:56.751 18:16:54 -- dd/common.sh@15 -- # local count=1 00:07:56.751 18:16:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:56.751 18:16:54 -- dd/common.sh@18 -- # gen_conf 00:07:56.751 18:16:54 -- dd/common.sh@31 -- # xtrace_disable 00:07:56.751 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:07:56.751 [2024-11-17 
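The { "subsystems": ... } blocks that keep reappearing between passes are what gen_conf emits: a minimal bdev-subsystem config that attaches the QEMU NVMe controller at 0000:00:06.0 as Nvme0 and then waits for bdev examine, delivered to spdk_dd on a file descriptor rather than a config file. A sketch of a function producing the same content; the real gen_conf in dd/common.sh builds it from the method_bdev_nvme_attach_controller_0 array declared earlier in the trace:

# Sketch only, matching the JSON printed in the passes above.
gen_conf() {
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}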
18:16:54.983093] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:56.752 [2024-11-17 18:16:54.983382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69552 ] 00:07:56.752 { 00:07:56.752 "subsystems": [ 00:07:56.752 { 00:07:56.752 "subsystem": "bdev", 00:07:56.752 "config": [ 00:07:56.752 { 00:07:56.752 "params": { 00:07:56.752 "trtype": "pcie", 00:07:56.752 "traddr": "0000:00:06.0", 00:07:56.752 "name": "Nvme0" 00:07:56.752 }, 00:07:56.752 "method": "bdev_nvme_attach_controller" 00:07:56.752 }, 00:07:56.752 { 00:07:56.752 "method": "bdev_wait_for_examine" 00:07:56.752 } 00:07:56.752 ] 00:07:56.752 } 00:07:56.752 ] 00:07:56.752 } 00:07:57.011 [2024-11-17 18:16:55.118110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.011 [2024-11-17 18:16:55.147605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.011  [2024-11-17T18:16:55.538Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:57.271 00:07:57.271 18:16:55 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:57.271 18:16:55 -- dd/basic_rw.sh@23 -- # count=7 00:07:57.271 18:16:55 -- dd/basic_rw.sh@24 -- # count=7 00:07:57.271 18:16:55 -- dd/basic_rw.sh@25 -- # size=57344 00:07:57.271 18:16:55 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:57.271 18:16:55 -- dd/common.sh@98 -- # xtrace_disable 00:07:57.271 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:07:57.839 18:16:55 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:57.839 18:16:55 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:57.839 18:16:55 -- dd/common.sh@31 -- # xtrace_disable 00:07:57.839 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:07:57.839 [2024-11-17 18:16:55.977022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:57.839 [2024-11-17 18:16:55.977341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69570 ] 00:07:57.839 { 00:07:57.839 "subsystems": [ 00:07:57.839 { 00:07:57.839 "subsystem": "bdev", 00:07:57.839 "config": [ 00:07:57.839 { 00:07:57.839 "params": { 00:07:57.839 "trtype": "pcie", 00:07:57.839 "traddr": "0000:00:06.0", 00:07:57.839 "name": "Nvme0" 00:07:57.839 }, 00:07:57.839 "method": "bdev_nvme_attach_controller" 00:07:57.839 }, 00:07:57.839 { 00:07:57.839 "method": "bdev_wait_for_examine" 00:07:57.839 } 00:07:57.839 ] 00:07:57.839 } 00:07:57.839 ] 00:07:57.839 } 00:07:58.098 [2024-11-17 18:16:56.115482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.098 [2024-11-17 18:16:56.145073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.098  [2024-11-17T18:16:56.624Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:58.357 00:07:58.357 18:16:56 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:58.357 18:16:56 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:58.357 18:16:56 -- dd/common.sh@31 -- # xtrace_disable 00:07:58.357 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.357 [2024-11-17 18:16:56.447595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:58.357 [2024-11-17 18:16:56.447717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69582 ] 00:07:58.357 { 00:07:58.357 "subsystems": [ 00:07:58.357 { 00:07:58.357 "subsystem": "bdev", 00:07:58.357 "config": [ 00:07:58.357 { 00:07:58.357 "params": { 00:07:58.357 "trtype": "pcie", 00:07:58.357 "traddr": "0000:00:06.0", 00:07:58.357 "name": "Nvme0" 00:07:58.357 }, 00:07:58.357 "method": "bdev_nvme_attach_controller" 00:07:58.357 }, 00:07:58.357 { 00:07:58.357 "method": "bdev_wait_for_examine" 00:07:58.357 } 00:07:58.357 ] 00:07:58.357 } 00:07:58.357 ] 00:07:58.357 } 00:07:58.357 [2024-11-17 18:16:56.583410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.357 [2024-11-17 18:16:56.612489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.616  [2024-11-17T18:16:56.883Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:58.616 00:07:58.616 18:16:56 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.616 18:16:56 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:58.616 18:16:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:58.616 18:16:56 -- dd/common.sh@11 -- # local nvme_ref= 00:07:58.616 18:16:56 -- dd/common.sh@12 -- # local size=57344 00:07:58.616 18:16:56 -- dd/common.sh@14 -- # local bs=1048576 00:07:58.616 18:16:56 -- dd/common.sh@15 -- # local count=1 00:07:58.616 18:16:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:58.616 18:16:56 -- dd/common.sh@18 -- # gen_conf 00:07:58.616 18:16:56 -- dd/common.sh@31 -- # xtrace_disable 00:07:58.616 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.875 [2024-11-17 
18:16:56.918503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:58.876 [2024-11-17 18:16:56.918600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69596 ] 00:07:58.876 { 00:07:58.876 "subsystems": [ 00:07:58.876 { 00:07:58.876 "subsystem": "bdev", 00:07:58.876 "config": [ 00:07:58.876 { 00:07:58.876 "params": { 00:07:58.876 "trtype": "pcie", 00:07:58.876 "traddr": "0000:00:06.0", 00:07:58.876 "name": "Nvme0" 00:07:58.876 }, 00:07:58.876 "method": "bdev_nvme_attach_controller" 00:07:58.876 }, 00:07:58.876 { 00:07:58.876 "method": "bdev_wait_for_examine" 00:07:58.876 } 00:07:58.876 ] 00:07:58.876 } 00:07:58.876 ] 00:07:58.876 } 00:07:58.876 [2024-11-17 18:16:57.053590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.876 [2024-11-17 18:16:57.082826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.135  [2024-11-17T18:16:57.402Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:59.135 00:07:59.135 18:16:57 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:59.135 18:16:57 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:59.135 18:16:57 -- dd/basic_rw.sh@23 -- # count=3 00:07:59.135 18:16:57 -- dd/basic_rw.sh@24 -- # count=3 00:07:59.135 18:16:57 -- dd/basic_rw.sh@25 -- # size=49152 00:07:59.135 18:16:57 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:59.135 18:16:57 -- dd/common.sh@98 -- # xtrace_disable 00:07:59.135 18:16:57 -- common/autotest_common.sh@10 -- # set +x 00:07:59.702 18:16:57 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:59.703 18:16:57 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:59.703 18:16:57 -- dd/common.sh@31 -- # xtrace_disable 00:07:59.703 18:16:57 -- common/autotest_common.sh@10 -- # set +x 00:07:59.703 [2024-11-17 18:16:57.860998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:59.703 [2024-11-17 18:16:57.861108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69614 ] 00:07:59.703 { 00:07:59.703 "subsystems": [ 00:07:59.703 { 00:07:59.703 "subsystem": "bdev", 00:07:59.703 "config": [ 00:07:59.703 { 00:07:59.703 "params": { 00:07:59.703 "trtype": "pcie", 00:07:59.703 "traddr": "0000:00:06.0", 00:07:59.703 "name": "Nvme0" 00:07:59.703 }, 00:07:59.703 "method": "bdev_nvme_attach_controller" 00:07:59.703 }, 00:07:59.703 { 00:07:59.703 "method": "bdev_wait_for_examine" 00:07:59.703 } 00:07:59.703 ] 00:07:59.703 } 00:07:59.703 ] 00:07:59.703 } 00:07:59.962 [2024-11-17 18:16:57.998673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.962 [2024-11-17 18:16:58.028258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.962  [2024-11-17T18:16:58.488Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:00.221 00:08:00.221 18:16:58 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:00.221 18:16:58 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:00.221 18:16:58 -- dd/common.sh@31 -- # xtrace_disable 00:08:00.221 18:16:58 -- common/autotest_common.sh@10 -- # set +x 00:08:00.221 [2024-11-17 18:16:58.329990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:00.221 [2024-11-17 18:16:58.330090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69621 ] 00:08:00.221 { 00:08:00.221 "subsystems": [ 00:08:00.221 { 00:08:00.221 "subsystem": "bdev", 00:08:00.221 "config": [ 00:08:00.221 { 00:08:00.221 "params": { 00:08:00.222 "trtype": "pcie", 00:08:00.222 "traddr": "0000:00:06.0", 00:08:00.222 "name": "Nvme0" 00:08:00.222 }, 00:08:00.222 "method": "bdev_nvme_attach_controller" 00:08:00.222 }, 00:08:00.222 { 00:08:00.222 "method": "bdev_wait_for_examine" 00:08:00.222 } 00:08:00.222 ] 00:08:00.222 } 00:08:00.222 ] 00:08:00.222 } 00:08:00.222 [2024-11-17 18:16:58.466708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.481 [2024-11-17 18:16:58.497877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.481  [2024-11-17T18:16:59.007Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:00.740 00:08:00.740 18:16:58 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.740 18:16:58 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:00.740 18:16:58 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:00.740 18:16:58 -- dd/common.sh@11 -- # local nvme_ref= 00:08:00.740 18:16:58 -- dd/common.sh@12 -- # local size=49152 00:08:00.740 18:16:58 -- dd/common.sh@14 -- # local bs=1048576 00:08:00.740 18:16:58 -- dd/common.sh@15 -- # local count=1 00:08:00.740 18:16:58 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:00.740 18:16:58 -- dd/common.sh@18 -- # gen_conf 00:08:00.740 18:16:58 -- dd/common.sh@31 -- # xtrace_disable 00:08:00.740 18:16:58 -- common/autotest_common.sh@10 -- # set +x 00:08:00.740 [2024-11-17 
18:16:58.810709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:00.740 [2024-11-17 18:16:58.810816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69640 ] 00:08:00.740 { 00:08:00.740 "subsystems": [ 00:08:00.740 { 00:08:00.740 "subsystem": "bdev", 00:08:00.740 "config": [ 00:08:00.740 { 00:08:00.740 "params": { 00:08:00.740 "trtype": "pcie", 00:08:00.740 "traddr": "0000:00:06.0", 00:08:00.740 "name": "Nvme0" 00:08:00.740 }, 00:08:00.740 "method": "bdev_nvme_attach_controller" 00:08:00.740 }, 00:08:00.740 { 00:08:00.740 "method": "bdev_wait_for_examine" 00:08:00.740 } 00:08:00.740 ] 00:08:00.740 } 00:08:00.741 ] 00:08:00.741 } 00:08:00.741 [2024-11-17 18:16:58.947091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.741 [2024-11-17 18:16:58.976393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.000  [2024-11-17T18:16:59.267Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:01.000 00:08:01.000 18:16:59 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:01.000 18:16:59 -- dd/basic_rw.sh@23 -- # count=3 00:08:01.000 18:16:59 -- dd/basic_rw.sh@24 -- # count=3 00:08:01.000 18:16:59 -- dd/basic_rw.sh@25 -- # size=49152 00:08:01.000 18:16:59 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:01.000 18:16:59 -- dd/common.sh@98 -- # xtrace_disable 00:08:01.000 18:16:59 -- common/autotest_common.sh@10 -- # set +x 00:08:01.570 18:16:59 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:01.570 18:16:59 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:01.570 18:16:59 -- dd/common.sh@31 -- # xtrace_disable 00:08:01.570 18:16:59 -- common/autotest_common.sh@10 -- # set +x 00:08:01.570 [2024-11-17 18:16:59.755383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:01.570 [2024-11-17 18:16:59.755506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69658 ] 00:08:01.570 { 00:08:01.570 "subsystems": [ 00:08:01.570 { 00:08:01.570 "subsystem": "bdev", 00:08:01.570 "config": [ 00:08:01.570 { 00:08:01.570 "params": { 00:08:01.570 "trtype": "pcie", 00:08:01.570 "traddr": "0000:00:06.0", 00:08:01.570 "name": "Nvme0" 00:08:01.570 }, 00:08:01.570 "method": "bdev_nvme_attach_controller" 00:08:01.570 }, 00:08:01.570 { 00:08:01.570 "method": "bdev_wait_for_examine" 00:08:01.570 } 00:08:01.570 ] 00:08:01.570 } 00:08:01.570 ] 00:08:01.570 } 00:08:01.829 [2024-11-17 18:16:59.888193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.830 [2024-11-17 18:16:59.917455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.830  [2024-11-17T18:17:00.356Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:02.089 00:08:02.089 18:17:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:02.089 18:17:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:02.089 18:17:00 -- dd/common.sh@31 -- # xtrace_disable 00:08:02.089 18:17:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.089 { 00:08:02.089 "subsystems": [ 00:08:02.089 { 00:08:02.089 "subsystem": "bdev", 00:08:02.089 "config": [ 00:08:02.089 { 00:08:02.089 "params": { 00:08:02.089 "trtype": "pcie", 00:08:02.089 "traddr": "0000:00:06.0", 00:08:02.089 "name": "Nvme0" 00:08:02.089 }, 00:08:02.089 "method": "bdev_nvme_attach_controller" 00:08:02.089 }, 00:08:02.089 { 00:08:02.089 "method": "bdev_wait_for_examine" 00:08:02.089 } 00:08:02.089 ] 00:08:02.089 } 00:08:02.089 ] 00:08:02.089 } 00:08:02.089 [2024-11-17 18:17:00.221097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:02.089 [2024-11-17 18:17:00.221193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69665 ] 00:08:02.349 [2024-11-17 18:17:00.357461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.349 [2024-11-17 18:17:00.386967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.349  [2024-11-17T18:17:00.876Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:02.609 00:08:02.609 18:17:00 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.609 18:17:00 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:02.609 18:17:00 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:02.609 18:17:00 -- dd/common.sh@11 -- # local nvme_ref= 00:08:02.609 18:17:00 -- dd/common.sh@12 -- # local size=49152 00:08:02.609 18:17:00 -- dd/common.sh@14 -- # local bs=1048576 00:08:02.609 18:17:00 -- dd/common.sh@15 -- # local count=1 00:08:02.609 18:17:00 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:02.609 18:17:00 -- dd/common.sh@18 -- # gen_conf 00:08:02.609 18:17:00 -- dd/common.sh@31 -- # xtrace_disable 00:08:02.609 18:17:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.609 [2024-11-17 18:17:00.704021] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:02.609 [2024-11-17 18:17:00.704574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69684 ] 00:08:02.609 { 00:08:02.609 "subsystems": [ 00:08:02.609 { 00:08:02.609 "subsystem": "bdev", 00:08:02.609 "config": [ 00:08:02.609 { 00:08:02.609 "params": { 00:08:02.609 "trtype": "pcie", 00:08:02.609 "traddr": "0000:00:06.0", 00:08:02.609 "name": "Nvme0" 00:08:02.609 }, 00:08:02.609 "method": "bdev_nvme_attach_controller" 00:08:02.609 }, 00:08:02.609 { 00:08:02.609 "method": "bdev_wait_for_examine" 00:08:02.609 } 00:08:02.609 ] 00:08:02.609 } 00:08:02.609 ] 00:08:02.609 } 00:08:02.609 [2024-11-17 18:17:00.839823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.609 [2024-11-17 18:17:00.869053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.870  [2024-11-17T18:17:01.137Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:02.870 00:08:02.870 00:08:02.870 real 0m11.721s 00:08:02.870 user 0m8.526s 00:08:02.870 sys 0m2.074s 00:08:02.870 18:17:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.870 18:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:02.870 ************************************ 00:08:02.870 END TEST dd_rw 00:08:02.870 ************************************ 00:08:03.130 18:17:01 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:03.130 18:17:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:03.130 18:17:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.130 18:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:03.130 ************************************ 00:08:03.130 START TEST dd_rw_offset 00:08:03.130 ************************************ 00:08:03.130 18:17:01 -- common/autotest_common.sh@1114 -- # basic_offset 
00:08:03.131 18:17:01 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:03.131 18:17:01 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:03.131 18:17:01 -- dd/common.sh@98 -- # xtrace_disable 00:08:03.131 18:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:03.131 18:17:01 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:03.131 18:17:01 -- dd/basic_rw.sh@56 -- # data=2k38ub939xgv23qe78vk625wgmo21l1ybgl22291hwgkzf87kwq37ez9hhcpo4jqacmivp5ktein3i3z5j9rylkeclqli7l6dlarnw54y6lnwlic5tz23tc7gi26d3uvm6jna5wa5bgeptcuhwk7du6cae6brupv3iztcoskvi9afux1g4722tzkbgfa56qxm1suw3q61apmpqtd98vgwqkn99cswp5udz3hlhngeg5fbj2mdbfu9l5lv8t79m7j0g5rifv7l5oiv1h6fguemtdxozd9mr6tayz12i7kj9bg0maitlqsdksawwzwz2xq9h7w3f61otgth81ifxkpr0u9ht89uxaqugz039zvadpn7g2bh2e611opbeb58535a0cs523kixqx8lyyljusp937v18dpheqy28qt82h1mb008rqj75yfso18hxtmgfddot08iurfqofi4vefawe9dwout3zpdkhxjnctwwklsg5kymiqajdornidobmnenuiimc8sumxdhcljyztqg7qn2vs619gaal6j3husbcgs32lgjxty817n0za4di3vycwkct7iyatt44rpybp0akzezwa2y61icuul6h2itj2f6kte2anciww6evxtjhxvgdwwfgwhqywfienaydabbrq89hghsoje1j57m1mk9tvjbro7ozoaue5jokpuqh95wgs97k455kj6dfjm85axydpycqjetq0dcd99xt6puve7or8woouw913eult0m5oxuqmtdd882wkswq6f4xe1q4bz74wj7yy4w3jslx1qqupvz2w6pf6nup1aefmbvkw5xju8yquxj6uo2shlqzstjmakzcf5bh3dpwiejh8xj0syt1zgzqdbg2ur1an4p2yx6d7erz16sx0abuac20pxdrm2fu0gypizkc4yjvq6p4tflwq30e14o53rvx36ujqx0oms9zot1t4edh3klnx8avica12vc74ajxwhhlritshurdxj7rq4fzt7nd40oece6g8eqc5xfwr8ihut8qqeapi5prj774knc4cdy85na0rqgv6wphnnkwxiqpkff37l9r8ol86vs4odd3o77ag1lcgoobvksvvedfmlnk53mtlo8volopor9z6gx2hx5h3yiktzwgwqdkn0g9wlccewzt5z6z00jsphghmyrp9ttifbv9vsremogb0hrg1yzj9dpnp337n6p10eu36yx39yx8x0wyfv7l2l4prydkcnim3u4tdffvwm3o8p8vgs0hzk1gpqb24k8e7knbua36prc1t568oa5jml880iju3zd11k3d6pdh6pa6lbhsjn8xlg2kfengwex3y1fdutp90dd3wvx8owwctw2pt8lf79rqd4wh8pv48qzq03n3s6e9pv3hw4y8ry87xmap7klil09k2p04s73ldg0s1ci28725soiaheznlcio4nyyb9d167ywjw4cszk56nby73o6l3t2cz6p2d7mwf82ibn6n3i9zpq19how6id80qx3ia722gfgz3ta9358zcgqkbdc22fcanwejjqo80uqykcrcb5spx067u26e8kal11z5ya9ys3zaq32n2layv77ia28ghtvnu6d4fy73n1eg9swqp3exo9wpjdo8ebitr7ib7ykk3wjx0njh1qfxqnmt0gb99o2q3b2h4w6fi1pevybesxw5jzgy0ntdi1ux92fvdflgo8qdgqpetgtk03ha7m53b1t4hd81rs37u0mkoaqh8noifbqdryzseoys4bynfbmrdmy8yptooz2btn4x79zgprumwl1nureic4zdb8yynuzcytnmgtmyi02fstj8r37egrrglaqvhe1yelig1wioq7xo70pslv3f1zzgob5ct2maqp5vrncc2fwwqusvccf7ezf482ca6tpnpeut8gkbyz0t98cnf4tw0g6im1yvlmr17e59toi24kp1js35xxzz6qhcedcerktm7flto7b904cpoif5vli7l95ehxpnxxr6mv0785ol1u73wru31lp40g8008qhnvatwdgiphqhh4kxliwl5hs6k5sqok5m3os2cqkouof8t1ujbxzsdgehab6k8sglxepg6mz9armd3wvt4ks02f123jhg5oeq9kz6wulvhmrdn76mikp8cuqfc6wxa0l53u1ha45mv1cnfyo5m0f93i8zs5dq8gjvenah2z1b6ta7hc1rk8jkpp2hvrbg9jxitvy13hko6bmt1wotxqc1t4kom81di3wr3br6isovc42o2wehb2vkhu4h8vyaa5q570qzfrchkcogh5fus7dbvkdnlssnu8t7vkk4oxip7onpo9flc9mtb21iywvjzjmee3nl5r1g0807tp7dbmiqvdzeyok3hh6i4a397k6b8p5sxwx0ps3r5rzamzexiy3u3v3l9gbp12vgkv4iza3cn8h3rz5qp8udot9u5ax1165hkoqwjnpud5q8b7oq6wv9fcuu713vbpwyq5mcze3ef8xbt9blei3gpl8u8zcse2gqlncjnqh4mu4eb15l5kda2pyckpj3tinyvdiari3mc85ubsj76dljsmo4z5h72hvbgnetflqwztheawdrozet8yab3pqumhkc98mk466n7cjoua9xr7irmtf1fspmti0ewxqqflx8h58jfa12g70bytei9nw6mz0bslcuqt0jzb34slu80gmradlf59povxzqashdacxyvh0eowtbvyoohpyr1orfhfn7zluz6ug2js1gs9vy1ebwzff687bbqk81svhj8ilufz6s9qfifc50dgmkocv9fhmzti6cfb68t1o6jq8nlep9df06fnx0smhtq9wz7e098l42dx0lnc0k775w70yj8tn82erf9dexltdp06raonhppjvour2pkpfstvl27hcq8zoyb0zj8hnjvfiu4jsrgchj989y7v80ejvqaagso347zbyolxpw2i3jkl2zzlptvgutkyxnzfr695m5q18pxk2pr79wmsiz4ykgeh6v2ztz7yby36b4vkxsnpb5naw1qy6tx6fcl7d1l2rpdfmql2ak5vkj0o4vuzvfj3ikexmj4lrsl3afhs5vxudmn8jxjlkirbeqg11wrdf2zrs
ut8ke50z7jxvvtok4ashrq6hn5z0y5cmq48t3ws13qky0xajhixmu9rq3j0zipxj998pr0wdoeewlw6c1p308hg6vh3lorw3newz0k18immpbq4w7wpf1pawvsa3m9fnczeuhfgq280e9da9z4oobg18c2mqsah4zqf3knyvi10yms1tq7t3jyy4czlpny347lg7oz0tt1vja2quoplzdokpjj5dsdhc94h6g3lminon7ui47un9hzf6qbxvc0ccdlivitmym36ysouq5kkq02x3uqxxu41sf4qym3qq6p3i0fz21qopcgrdpnyedii08q5y474jhre78p7b6te9w62t736jwuuq0tcjx68p44cupqzmpqvksa7qtwqce2rsxc17ugkby06rri0bnyjuk6dje6w7s3pmkfklxomz3jxzjdzim1fiwy2jw01vlv95ro2iacagg8otgusqofq063xdgw1cc582s8tyw69d4slou9d4jbrjlehpsrj4dmxm46ia9jfw5ybkm2npurndr3hif7w0tqe62q4gavj8pfrcuy8c2ppkijncvssi8hlm3jr5buqvduc00qg9mhtdvi8pddl83l41t2vbazhommrm238sontxgriokc15a7kz50z29pso1l34j4ggqikzw0wpjmvhag89vqx0lhlyeept8b28cplqzwyh3yxjpvulyoy65hv8gwdqck368ksl32531o60c2yl4hp22l19ejwq1urevpvbuu1j5kke0v8movdvcyddf9j3i4m7ttjlqyusq33nel83ggom4vmurx0cqxd614rc8gzetqpgy28em7voyfuiuep6bgf6u4tbfnq56f1qgzz6e16yh3h0413r1rmalshe37u66iyne3mmk9fh8wrudyv0aly1wyuz9kjzyyt7f8bn8zdn16msa3odgd82uas6 00:08:03.131 18:17:01 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:03.131 18:17:01 -- dd/basic_rw.sh@59 -- # gen_conf 00:08:03.131 18:17:01 -- dd/common.sh@31 -- # xtrace_disable 00:08:03.131 18:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:03.131 { 00:08:03.131 "subsystems": [ 00:08:03.131 { 00:08:03.131 "subsystem": "bdev", 00:08:03.131 "config": [ 00:08:03.131 { 00:08:03.131 "params": { 00:08:03.131 "trtype": "pcie", 00:08:03.131 "traddr": "0000:00:06.0", 00:08:03.131 "name": "Nvme0" 00:08:03.131 }, 00:08:03.131 "method": "bdev_nvme_attach_controller" 00:08:03.131 }, 00:08:03.131 { 00:08:03.131 "method": "bdev_wait_for_examine" 00:08:03.131 } 00:08:03.131 ] 00:08:03.131 } 00:08:03.131 ] 00:08:03.131 } 00:08:03.131 [2024-11-17 18:17:01.279945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:03.131 [2024-11-17 18:17:01.280069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69708 ] 00:08:03.391 [2024-11-17 18:17:01.414210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.391 [2024-11-17 18:17:01.444503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.391  [2024-11-17T18:17:01.918Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:03.651 00:08:03.651 18:17:01 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:03.651 18:17:01 -- dd/basic_rw.sh@65 -- # gen_conf 00:08:03.651 18:17:01 -- dd/common.sh@31 -- # xtrace_disable 00:08:03.651 18:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:03.651 [2024-11-17 18:17:01.750202] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:03.651 [2024-11-17 18:17:01.750344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69726 ] 00:08:03.651 { 00:08:03.651 "subsystems": [ 00:08:03.651 { 00:08:03.651 "subsystem": "bdev", 00:08:03.651 "config": [ 00:08:03.651 { 00:08:03.651 "params": { 00:08:03.651 "trtype": "pcie", 00:08:03.651 "traddr": "0000:00:06.0", 00:08:03.651 "name": "Nvme0" 00:08:03.651 }, 00:08:03.651 "method": "bdev_nvme_attach_controller" 00:08:03.651 }, 00:08:03.651 { 00:08:03.651 "method": "bdev_wait_for_examine" 00:08:03.651 } 00:08:03.651 ] 00:08:03.651 } 00:08:03.651 ] 00:08:03.651 } 00:08:03.651 [2024-11-17 18:17:01.886086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.651 [2024-11-17 18:17:01.915545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.913  [2024-11-17T18:17:02.180Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:03.913 00:08:03.913 18:17:02 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:03.913 ************************************ 00:08:03.913 END TEST dd_rw_offset 00:08:03.913 ************************************ 00:08:03.914 18:17:02 -- dd/basic_rw.sh@72 -- # [[ 2k38ub939xgv23qe78vk625wgmo21l1ybgl22291hwgkzf87kwq37ez9hhcpo4jqacmivp5ktein3i3z5j9rylkeclqli7l6dlarnw54y6lnwlic5tz23tc7gi26d3uvm6jna5wa5bgeptcuhwk7du6cae6brupv3iztcoskvi9afux1g4722tzkbgfa56qxm1suw3q61apmpqtd98vgwqkn99cswp5udz3hlhngeg5fbj2mdbfu9l5lv8t79m7j0g5rifv7l5oiv1h6fguemtdxozd9mr6tayz12i7kj9bg0maitlqsdksawwzwz2xq9h7w3f61otgth81ifxkpr0u9ht89uxaqugz039zvadpn7g2bh2e611opbeb58535a0cs523kixqx8lyyljusp937v18dpheqy28qt82h1mb008rqj75yfso18hxtmgfddot08iurfqofi4vefawe9dwout3zpdkhxjnctwwklsg5kymiqajdornidobmnenuiimc8sumxdhcljyztqg7qn2vs619gaal6j3husbcgs32lgjxty817n0za4di3vycwkct7iyatt44rpybp0akzezwa2y61icuul6h2itj2f6kte2anciww6evxtjhxvgdwwfgwhqywfienaydabbrq89hghsoje1j57m1mk9tvjbro7ozoaue5jokpuqh95wgs97k455kj6dfjm85axydpycqjetq0dcd99xt6puve7or8woouw913eult0m5oxuqmtdd882wkswq6f4xe1q4bz74wj7yy4w3jslx1qqupvz2w6pf6nup1aefmbvkw5xju8yquxj6uo2shlqzstjmakzcf5bh3dpwiejh8xj0syt1zgzqdbg2ur1an4p2yx6d7erz16sx0abuac20pxdrm2fu0gypizkc4yjvq6p4tflwq30e14o53rvx36ujqx0oms9zot1t4edh3klnx8avica12vc74ajxwhhlritshurdxj7rq4fzt7nd40oece6g8eqc5xfwr8ihut8qqeapi5prj774knc4cdy85na0rqgv6wphnnkwxiqpkff37l9r8ol86vs4odd3o77ag1lcgoobvksvvedfmlnk53mtlo8volopor9z6gx2hx5h3yiktzwgwqdkn0g9wlccewzt5z6z00jsphghmyrp9ttifbv9vsremogb0hrg1yzj9dpnp337n6p10eu36yx39yx8x0wyfv7l2l4prydkcnim3u4tdffvwm3o8p8vgs0hzk1gpqb24k8e7knbua36prc1t568oa5jml880iju3zd11k3d6pdh6pa6lbhsjn8xlg2kfengwex3y1fdutp90dd3wvx8owwctw2pt8lf79rqd4wh8pv48qzq03n3s6e9pv3hw4y8ry87xmap7klil09k2p04s73ldg0s1ci28725soiaheznlcio4nyyb9d167ywjw4cszk56nby73o6l3t2cz6p2d7mwf82ibn6n3i9zpq19how6id80qx3ia722gfgz3ta9358zcgqkbdc22fcanwejjqo80uqykcrcb5spx067u26e8kal11z5ya9ys3zaq32n2layv77ia28ghtvnu6d4fy73n1eg9swqp3exo9wpjdo8ebitr7ib7ykk3wjx0njh1qfxqnmt0gb99o2q3b2h4w6fi1pevybesxw5jzgy0ntdi1ux92fvdflgo8qdgqpetgtk03ha7m53b1t4hd81rs37u0mkoaqh8noifbqdryzseoys4bynfbmrdmy8yptooz2btn4x79zgprumwl1nureic4zdb8yynuzcytnmgtmyi02fstj8r37egrrglaqvhe1yelig1wioq7xo70pslv3f1zzgob5ct2maqp5vrncc2fwwqusvccf7ezf482ca6tpnpeut8gkbyz0t98cnf4tw0g6im1yvlmr17e59toi24kp1js35xxzz6qhcedcerktm7flto7b904cpoif5vli7l95ehxpnxxr6mv0785ol1u73wru31lp40g8008qhnvatwdgiphqhh4kxliwl5hs6k5sqok5m3os2cqkouof8t1ujbxzsdgehab6k8sglxepg6mz9armd3wvt4ks02f123jhg5oeq9kz6wulvhmrdn76mikp8cuqfc6wxa0l53u1ha45mv1cnfyo5m0f93i8zs5dq8gjvenah2z1b6ta
7hc1rk8jkpp2hvrbg9jxitvy13hko6bmt1wotxqc1t4kom81di3wr3br6isovc42o2wehb2vkhu4h8vyaa5q570qzfrchkcogh5fus7dbvkdnlssnu8t7vkk4oxip7onpo9flc9mtb21iywvjzjmee3nl5r1g0807tp7dbmiqvdzeyok3hh6i4a397k6b8p5sxwx0ps3r5rzamzexiy3u3v3l9gbp12vgkv4iza3cn8h3rz5qp8udot9u5ax1165hkoqwjnpud5q8b7oq6wv9fcuu713vbpwyq5mcze3ef8xbt9blei3gpl8u8zcse2gqlncjnqh4mu4eb15l5kda2pyckpj3tinyvdiari3mc85ubsj76dljsmo4z5h72hvbgnetflqwztheawdrozet8yab3pqumhkc98mk466n7cjoua9xr7irmtf1fspmti0ewxqqflx8h58jfa12g70bytei9nw6mz0bslcuqt0jzb34slu80gmradlf59povxzqashdacxyvh0eowtbvyoohpyr1orfhfn7zluz6ug2js1gs9vy1ebwzff687bbqk81svhj8ilufz6s9qfifc50dgmkocv9fhmzti6cfb68t1o6jq8nlep9df06fnx0smhtq9wz7e098l42dx0lnc0k775w70yj8tn82erf9dexltdp06raonhppjvour2pkpfstvl27hcq8zoyb0zj8hnjvfiu4jsrgchj989y7v80ejvqaagso347zbyolxpw2i3jkl2zzlptvgutkyxnzfr695m5q18pxk2pr79wmsiz4ykgeh6v2ztz7yby36b4vkxsnpb5naw1qy6tx6fcl7d1l2rpdfmql2ak5vkj0o4vuzvfj3ikexmj4lrsl3afhs5vxudmn8jxjlkirbeqg11wrdf2zrsut8ke50z7jxvvtok4ashrq6hn5z0y5cmq48t3ws13qky0xajhixmu9rq3j0zipxj998pr0wdoeewlw6c1p308hg6vh3lorw3newz0k18immpbq4w7wpf1pawvsa3m9fnczeuhfgq280e9da9z4oobg18c2mqsah4zqf3knyvi10yms1tq7t3jyy4czlpny347lg7oz0tt1vja2quoplzdokpjj5dsdhc94h6g3lminon7ui47un9hzf6qbxvc0ccdlivitmym36ysouq5kkq02x3uqxxu41sf4qym3qq6p3i0fz21qopcgrdpnyedii08q5y474jhre78p7b6te9w62t736jwuuq0tcjx68p44cupqzmpqvksa7qtwqce2rsxc17ugkby06rri0bnyjuk6dje6w7s3pmkfklxomz3jxzjdzim1fiwy2jw01vlv95ro2iacagg8otgusqofq063xdgw1cc582s8tyw69d4slou9d4jbrjlehpsrj4dmxm46ia9jfw5ybkm2npurndr3hif7w0tqe62q4gavj8pfrcuy8c2ppkijncvssi8hlm3jr5buqvduc00qg9mhtdvi8pddl83l41t2vbazhommrm238sontxgriokc15a7kz50z29pso1l34j4ggqikzw0wpjmvhag89vqx0lhlyeept8b28cplqzwyh3yxjpvulyoy65hv8gwdqck368ksl32531o60c2yl4hp22l19ejwq1urevpvbuu1j5kke0v8movdvcyddf9j3i4m7ttjlqyusq33nel83ggom4vmurx0cqxd614rc8gzetqpgy28em7voyfuiuep6bgf6u4tbfnq56f1qgzz6e16yh3h0413r1rmalshe37u66iyne3mmk9fh8wrudyv0aly1wyuz9kjzyyt7f8bn8zdn16msa3odgd82uas6 == 
\2\k\3\8\u\b\9\3\9\x\g\v\2\3\q\e\7\8\v\k\6\2\5\w\g\m\o\2\1\l\1\y\b\g\l\2\2\2\9\1\h\w\g\k\z\f\8\7\k\w\q\3\7\e\z\9\h\h\c\p\o\4\j\q\a\c\m\i\v\p\5\k\t\e\i\n\3\i\3\z\5\j\9\r\y\l\k\e\c\l\q\l\i\7\l\6\d\l\a\r\n\w\5\4\y\6\l\n\w\l\i\c\5\t\z\2\3\t\c\7\g\i\2\6\d\3\u\v\m\6\j\n\a\5\w\a\5\b\g\e\p\t\c\u\h\w\k\7\d\u\6\c\a\e\6\b\r\u\p\v\3\i\z\t\c\o\s\k\v\i\9\a\f\u\x\1\g\4\7\2\2\t\z\k\b\g\f\a\5\6\q\x\m\1\s\u\w\3\q\6\1\a\p\m\p\q\t\d\9\8\v\g\w\q\k\n\9\9\c\s\w\p\5\u\d\z\3\h\l\h\n\g\e\g\5\f\b\j\2\m\d\b\f\u\9\l\5\l\v\8\t\7\9\m\7\j\0\g\5\r\i\f\v\7\l\5\o\i\v\1\h\6\f\g\u\e\m\t\d\x\o\z\d\9\m\r\6\t\a\y\z\1\2\i\7\k\j\9\b\g\0\m\a\i\t\l\q\s\d\k\s\a\w\w\z\w\z\2\x\q\9\h\7\w\3\f\6\1\o\t\g\t\h\8\1\i\f\x\k\p\r\0\u\9\h\t\8\9\u\x\a\q\u\g\z\0\3\9\z\v\a\d\p\n\7\g\2\b\h\2\e\6\1\1\o\p\b\e\b\5\8\5\3\5\a\0\c\s\5\2\3\k\i\x\q\x\8\l\y\y\l\j\u\s\p\9\3\7\v\1\8\d\p\h\e\q\y\2\8\q\t\8\2\h\1\m\b\0\0\8\r\q\j\7\5\y\f\s\o\1\8\h\x\t\m\g\f\d\d\o\t\0\8\i\u\r\f\q\o\f\i\4\v\e\f\a\w\e\9\d\w\o\u\t\3\z\p\d\k\h\x\j\n\c\t\w\w\k\l\s\g\5\k\y\m\i\q\a\j\d\o\r\n\i\d\o\b\m\n\e\n\u\i\i\m\c\8\s\u\m\x\d\h\c\l\j\y\z\t\q\g\7\q\n\2\v\s\6\1\9\g\a\a\l\6\j\3\h\u\s\b\c\g\s\3\2\l\g\j\x\t\y\8\1\7\n\0\z\a\4\d\i\3\v\y\c\w\k\c\t\7\i\y\a\t\t\4\4\r\p\y\b\p\0\a\k\z\e\z\w\a\2\y\6\1\i\c\u\u\l\6\h\2\i\t\j\2\f\6\k\t\e\2\a\n\c\i\w\w\6\e\v\x\t\j\h\x\v\g\d\w\w\f\g\w\h\q\y\w\f\i\e\n\a\y\d\a\b\b\r\q\8\9\h\g\h\s\o\j\e\1\j\5\7\m\1\m\k\9\t\v\j\b\r\o\7\o\z\o\a\u\e\5\j\o\k\p\u\q\h\9\5\w\g\s\9\7\k\4\5\5\k\j\6\d\f\j\m\8\5\a\x\y\d\p\y\c\q\j\e\t\q\0\d\c\d\9\9\x\t\6\p\u\v\e\7\o\r\8\w\o\o\u\w\9\1\3\e\u\l\t\0\m\5\o\x\u\q\m\t\d\d\8\8\2\w\k\s\w\q\6\f\4\x\e\1\q\4\b\z\7\4\w\j\7\y\y\4\w\3\j\s\l\x\1\q\q\u\p\v\z\2\w\6\p\f\6\n\u\p\1\a\e\f\m\b\v\k\w\5\x\j\u\8\y\q\u\x\j\6\u\o\2\s\h\l\q\z\s\t\j\m\a\k\z\c\f\5\b\h\3\d\p\w\i\e\j\h\8\x\j\0\s\y\t\1\z\g\z\q\d\b\g\2\u\r\1\a\n\4\p\2\y\x\6\d\7\e\r\z\1\6\s\x\0\a\b\u\a\c\2\0\p\x\d\r\m\2\f\u\0\g\y\p\i\z\k\c\4\y\j\v\q\6\p\4\t\f\l\w\q\3\0\e\1\4\o\5\3\r\v\x\3\6\u\j\q\x\0\o\m\s\9\z\o\t\1\t\4\e\d\h\3\k\l\n\x\8\a\v\i\c\a\1\2\v\c\7\4\a\j\x\w\h\h\l\r\i\t\s\h\u\r\d\x\j\7\r\q\4\f\z\t\7\n\d\4\0\o\e\c\e\6\g\8\e\q\c\5\x\f\w\r\8\i\h\u\t\8\q\q\e\a\p\i\5\p\r\j\7\7\4\k\n\c\4\c\d\y\8\5\n\a\0\r\q\g\v\6\w\p\h\n\n\k\w\x\i\q\p\k\f\f\3\7\l\9\r\8\o\l\8\6\v\s\4\o\d\d\3\o\7\7\a\g\1\l\c\g\o\o\b\v\k\s\v\v\e\d\f\m\l\n\k\5\3\m\t\l\o\8\v\o\l\o\p\o\r\9\z\6\g\x\2\h\x\5\h\3\y\i\k\t\z\w\g\w\q\d\k\n\0\g\9\w\l\c\c\e\w\z\t\5\z\6\z\0\0\j\s\p\h\g\h\m\y\r\p\9\t\t\i\f\b\v\9\v\s\r\e\m\o\g\b\0\h\r\g\1\y\z\j\9\d\p\n\p\3\3\7\n\6\p\1\0\e\u\3\6\y\x\3\9\y\x\8\x\0\w\y\f\v\7\l\2\l\4\p\r\y\d\k\c\n\i\m\3\u\4\t\d\f\f\v\w\m\3\o\8\p\8\v\g\s\0\h\z\k\1\g\p\q\b\2\4\k\8\e\7\k\n\b\u\a\3\6\p\r\c\1\t\5\6\8\o\a\5\j\m\l\8\8\0\i\j\u\3\z\d\1\1\k\3\d\6\p\d\h\6\p\a\6\l\b\h\s\j\n\8\x\l\g\2\k\f\e\n\g\w\e\x\3\y\1\f\d\u\t\p\9\0\d\d\3\w\v\x\8\o\w\w\c\t\w\2\p\t\8\l\f\7\9\r\q\d\4\w\h\8\p\v\4\8\q\z\q\0\3\n\3\s\6\e\9\p\v\3\h\w\4\y\8\r\y\8\7\x\m\a\p\7\k\l\i\l\0\9\k\2\p\0\4\s\7\3\l\d\g\0\s\1\c\i\2\8\7\2\5\s\o\i\a\h\e\z\n\l\c\i\o\4\n\y\y\b\9\d\1\6\7\y\w\j\w\4\c\s\z\k\5\6\n\b\y\7\3\o\6\l\3\t\2\c\z\6\p\2\d\7\m\w\f\8\2\i\b\n\6\n\3\i\9\z\p\q\1\9\h\o\w\6\i\d\8\0\q\x\3\i\a\7\2\2\g\f\g\z\3\t\a\9\3\5\8\z\c\g\q\k\b\d\c\2\2\f\c\a\n\w\e\j\j\q\o\8\0\u\q\y\k\c\r\c\b\5\s\p\x\0\6\7\u\2\6\e\8\k\a\l\1\1\z\5\y\a\9\y\s\3\z\a\q\3\2\n\2\l\a\y\v\7\7\i\a\2\8\g\h\t\v\n\u\6\d\4\f\y\7\3\n\1\e\g\9\s\w\q\p\3\e\x\o\9\w\p\j\d\o\8\e\b\i\t\r\7\i\b\7\y\k\k\3\w\j\x\0\n\j\h\1\q\f\x\q\n\m\t\0\g\b\9\9\o\2\q\3\b\2\h\4\w\6\f\i\1\p\e\v\y\b\e\s\x\w\5\j\z\g\y\0\n\t\d\i\1\u\x\9\2\f\v\d\f\l\g\o\8\q\d\g\q\p\e\t\g\t\k\0\3\h\a\7\m\5\3\b\1\t\4\h\d\8\1\r\s\3\7\u\0\m\k\o\a\q\h\8\n\o\i\f\b\q\d\r\y\z\s\
e\o\y\s\4\b\y\n\f\b\m\r\d\m\y\8\y\p\t\o\o\z\2\b\t\n\4\x\7\9\z\g\p\r\u\m\w\l\1\n\u\r\e\i\c\4\z\d\b\8\y\y\n\u\z\c\y\t\n\m\g\t\m\y\i\0\2\f\s\t\j\8\r\3\7\e\g\r\r\g\l\a\q\v\h\e\1\y\e\l\i\g\1\w\i\o\q\7\x\o\7\0\p\s\l\v\3\f\1\z\z\g\o\b\5\c\t\2\m\a\q\p\5\v\r\n\c\c\2\f\w\w\q\u\s\v\c\c\f\7\e\z\f\4\8\2\c\a\6\t\p\n\p\e\u\t\8\g\k\b\y\z\0\t\9\8\c\n\f\4\t\w\0\g\6\i\m\1\y\v\l\m\r\1\7\e\5\9\t\o\i\2\4\k\p\1\j\s\3\5\x\x\z\z\6\q\h\c\e\d\c\e\r\k\t\m\7\f\l\t\o\7\b\9\0\4\c\p\o\i\f\5\v\l\i\7\l\9\5\e\h\x\p\n\x\x\r\6\m\v\0\7\8\5\o\l\1\u\7\3\w\r\u\3\1\l\p\4\0\g\8\0\0\8\q\h\n\v\a\t\w\d\g\i\p\h\q\h\h\4\k\x\l\i\w\l\5\h\s\6\k\5\s\q\o\k\5\m\3\o\s\2\c\q\k\o\u\o\f\8\t\1\u\j\b\x\z\s\d\g\e\h\a\b\6\k\8\s\g\l\x\e\p\g\6\m\z\9\a\r\m\d\3\w\v\t\4\k\s\0\2\f\1\2\3\j\h\g\5\o\e\q\9\k\z\6\w\u\l\v\h\m\r\d\n\7\6\m\i\k\p\8\c\u\q\f\c\6\w\x\a\0\l\5\3\u\1\h\a\4\5\m\v\1\c\n\f\y\o\5\m\0\f\9\3\i\8\z\s\5\d\q\8\g\j\v\e\n\a\h\2\z\1\b\6\t\a\7\h\c\1\r\k\8\j\k\p\p\2\h\v\r\b\g\9\j\x\i\t\v\y\1\3\h\k\o\6\b\m\t\1\w\o\t\x\q\c\1\t\4\k\o\m\8\1\d\i\3\w\r\3\b\r\6\i\s\o\v\c\4\2\o\2\w\e\h\b\2\v\k\h\u\4\h\8\v\y\a\a\5\q\5\7\0\q\z\f\r\c\h\k\c\o\g\h\5\f\u\s\7\d\b\v\k\d\n\l\s\s\n\u\8\t\7\v\k\k\4\o\x\i\p\7\o\n\p\o\9\f\l\c\9\m\t\b\2\1\i\y\w\v\j\z\j\m\e\e\3\n\l\5\r\1\g\0\8\0\7\t\p\7\d\b\m\i\q\v\d\z\e\y\o\k\3\h\h\6\i\4\a\3\9\7\k\6\b\8\p\5\s\x\w\x\0\p\s\3\r\5\r\z\a\m\z\e\x\i\y\3\u\3\v\3\l\9\g\b\p\1\2\v\g\k\v\4\i\z\a\3\c\n\8\h\3\r\z\5\q\p\8\u\d\o\t\9\u\5\a\x\1\1\6\5\h\k\o\q\w\j\n\p\u\d\5\q\8\b\7\o\q\6\w\v\9\f\c\u\u\7\1\3\v\b\p\w\y\q\5\m\c\z\e\3\e\f\8\x\b\t\9\b\l\e\i\3\g\p\l\8\u\8\z\c\s\e\2\g\q\l\n\c\j\n\q\h\4\m\u\4\e\b\1\5\l\5\k\d\a\2\p\y\c\k\p\j\3\t\i\n\y\v\d\i\a\r\i\3\m\c\8\5\u\b\s\j\7\6\d\l\j\s\m\o\4\z\5\h\7\2\h\v\b\g\n\e\t\f\l\q\w\z\t\h\e\a\w\d\r\o\z\e\t\8\y\a\b\3\p\q\u\m\h\k\c\9\8\m\k\4\6\6\n\7\c\j\o\u\a\9\x\r\7\i\r\m\t\f\1\f\s\p\m\t\i\0\e\w\x\q\q\f\l\x\8\h\5\8\j\f\a\1\2\g\7\0\b\y\t\e\i\9\n\w\6\m\z\0\b\s\l\c\u\q\t\0\j\z\b\3\4\s\l\u\8\0\g\m\r\a\d\l\f\5\9\p\o\v\x\z\q\a\s\h\d\a\c\x\y\v\h\0\e\o\w\t\b\v\y\o\o\h\p\y\r\1\o\r\f\h\f\n\7\z\l\u\z\6\u\g\2\j\s\1\g\s\9\v\y\1\e\b\w\z\f\f\6\8\7\b\b\q\k\8\1\s\v\h\j\8\i\l\u\f\z\6\s\9\q\f\i\f\c\5\0\d\g\m\k\o\c\v\9\f\h\m\z\t\i\6\c\f\b\6\8\t\1\o\6\j\q\8\n\l\e\p\9\d\f\0\6\f\n\x\0\s\m\h\t\q\9\w\z\7\e\0\9\8\l\4\2\d\x\0\l\n\c\0\k\7\7\5\w\7\0\y\j\8\t\n\8\2\e\r\f\9\d\e\x\l\t\d\p\0\6\r\a\o\n\h\p\p\j\v\o\u\r\2\p\k\p\f\s\t\v\l\2\7\h\c\q\8\z\o\y\b\0\z\j\8\h\n\j\v\f\i\u\4\j\s\r\g\c\h\j\9\8\9\y\7\v\8\0\e\j\v\q\a\a\g\s\o\3\4\7\z\b\y\o\l\x\p\w\2\i\3\j\k\l\2\z\z\l\p\t\v\g\u\t\k\y\x\n\z\f\r\6\9\5\m\5\q\1\8\p\x\k\2\p\r\7\9\w\m\s\i\z\4\y\k\g\e\h\6\v\2\z\t\z\7\y\b\y\3\6\b\4\v\k\x\s\n\p\b\5\n\a\w\1\q\y\6\t\x\6\f\c\l\7\d\1\l\2\r\p\d\f\m\q\l\2\a\k\5\v\k\j\0\o\4\v\u\z\v\f\j\3\i\k\e\x\m\j\4\l\r\s\l\3\a\f\h\s\5\v\x\u\d\m\n\8\j\x\j\l\k\i\r\b\e\q\g\1\1\w\r\d\f\2\z\r\s\u\t\8\k\e\5\0\z\7\j\x\v\v\t\o\k\4\a\s\h\r\q\6\h\n\5\z\0\y\5\c\m\q\4\8\t\3\w\s\1\3\q\k\y\0\x\a\j\h\i\x\m\u\9\r\q\3\j\0\z\i\p\x\j\9\9\8\p\r\0\w\d\o\e\e\w\l\w\6\c\1\p\3\0\8\h\g\6\v\h\3\l\o\r\w\3\n\e\w\z\0\k\1\8\i\m\m\p\b\q\4\w\7\w\p\f\1\p\a\w\v\s\a\3\m\9\f\n\c\z\e\u\h\f\g\q\2\8\0\e\9\d\a\9\z\4\o\o\b\g\1\8\c\2\m\q\s\a\h\4\z\q\f\3\k\n\y\v\i\1\0\y\m\s\1\t\q\7\t\3\j\y\y\4\c\z\l\p\n\y\3\4\7\l\g\7\o\z\0\t\t\1\v\j\a\2\q\u\o\p\l\z\d\o\k\p\j\j\5\d\s\d\h\c\9\4\h\6\g\3\l\m\i\n\o\n\7\u\i\4\7\u\n\9\h\z\f\6\q\b\x\v\c\0\c\c\d\l\i\v\i\t\m\y\m\3\6\y\s\o\u\q\5\k\k\q\0\2\x\3\u\q\x\x\u\4\1\s\f\4\q\y\m\3\q\q\6\p\3\i\0\f\z\2\1\q\o\p\c\g\r\d\p\n\y\e\d\i\i\0\8\q\5\y\4\7\4\j\h\r\e\7\8\p\7\b\6\t\e\9\w\6\2\t\7\3\6\j\w\u\u\q\0\t\c\j\x\6\8\p\4\4\c\u\p\q\z\m\p\q\v\k\s\a\7\q\t\w\q\c\e\2\r\s\x\c\1\7\u\g\k\b\y\0\6\r\r\i\0\b\n\y\j\u\k\6\d
\j\e\6\w\7\s\3\p\m\k\f\k\l\x\o\m\z\3\j\x\z\j\d\z\i\m\1\f\i\w\y\2\j\w\0\1\v\l\v\9\5\r\o\2\i\a\c\a\g\g\8\o\t\g\u\s\q\o\f\q\0\6\3\x\d\g\w\1\c\c\5\8\2\s\8\t\y\w\6\9\d\4\s\l\o\u\9\d\4\j\b\r\j\l\e\h\p\s\r\j\4\d\m\x\m\4\6\i\a\9\j\f\w\5\y\b\k\m\2\n\p\u\r\n\d\r\3\h\i\f\7\w\0\t\q\e\6\2\q\4\g\a\v\j\8\p\f\r\c\u\y\8\c\2\p\p\k\i\j\n\c\v\s\s\i\8\h\l\m\3\j\r\5\b\u\q\v\d\u\c\0\0\q\g\9\m\h\t\d\v\i\8\p\d\d\l\8\3\l\4\1\t\2\v\b\a\z\h\o\m\m\r\m\2\3\8\s\o\n\t\x\g\r\i\o\k\c\1\5\a\7\k\z\5\0\z\2\9\p\s\o\1\l\3\4\j\4\g\g\q\i\k\z\w\0\w\p\j\m\v\h\a\g\8\9\v\q\x\0\l\h\l\y\e\e\p\t\8\b\2\8\c\p\l\q\z\w\y\h\3\y\x\j\p\v\u\l\y\o\y\6\5\h\v\8\g\w\d\q\c\k\3\6\8\k\s\l\3\2\5\3\1\o\6\0\c\2\y\l\4\h\p\2\2\l\1\9\e\j\w\q\1\u\r\e\v\p\v\b\u\u\1\j\5\k\k\e\0\v\8\m\o\v\d\v\c\y\d\d\f\9\j\3\i\4\m\7\t\t\j\l\q\y\u\s\q\3\3\n\e\l\8\3\g\g\o\m\4\v\m\u\r\x\0\c\q\x\d\6\1\4\r\c\8\g\z\e\t\q\p\g\y\2\8\e\m\7\v\o\y\f\u\i\u\e\p\6\b\g\f\6\u\4\t\b\f\n\q\5\6\f\1\q\g\z\z\6\e\1\6\y\h\3\h\0\4\1\3\r\1\r\m\a\l\s\h\e\3\7\u\6\6\i\y\n\e\3\m\m\k\9\f\h\8\w\r\u\d\y\v\0\a\l\y\1\w\y\u\z\9\k\j\z\y\y\t\7\f\8\b\n\8\z\d\n\1\6\m\s\a\3\o\d\g\d\8\2\u\a\s\6 ]] 00:08:03.914 00:08:03.914 real 0m0.989s 00:08:03.914 user 0m0.655s 00:08:03.914 sys 0m0.207s 00:08:03.914 18:17:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:03.914 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:08:04.175 18:17:02 -- dd/basic_rw.sh@1 -- # cleanup 00:08:04.175 18:17:02 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:04.175 18:17:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:04.175 18:17:02 -- dd/common.sh@11 -- # local nvme_ref= 00:08:04.175 18:17:02 -- dd/common.sh@12 -- # local size=0xffff 00:08:04.175 18:17:02 -- dd/common.sh@14 -- # local bs=1048576 00:08:04.175 18:17:02 -- dd/common.sh@15 -- # local count=1 00:08:04.175 18:17:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:04.175 18:17:02 -- dd/common.sh@18 -- # gen_conf 00:08:04.175 18:17:02 -- dd/common.sh@31 -- # xtrace_disable 00:08:04.175 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:08:04.175 [2024-11-17 18:17:02.264587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:04.175 [2024-11-17 18:17:02.264702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69748 ] 00:08:04.175 { 00:08:04.175 "subsystems": [ 00:08:04.175 { 00:08:04.175 "subsystem": "bdev", 00:08:04.175 "config": [ 00:08:04.175 { 00:08:04.175 "params": { 00:08:04.175 "trtype": "pcie", 00:08:04.175 "traddr": "0000:00:06.0", 00:08:04.175 "name": "Nvme0" 00:08:04.175 }, 00:08:04.175 "method": "bdev_nvme_attach_controller" 00:08:04.175 }, 00:08:04.175 { 00:08:04.175 "method": "bdev_wait_for_examine" 00:08:04.175 } 00:08:04.175 ] 00:08:04.175 } 00:08:04.175 ] 00:08:04.175 } 00:08:04.175 [2024-11-17 18:17:02.401799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.175 [2024-11-17 18:17:02.432032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.435  [2024-11-17T18:17:02.961Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:04.694 00:08:04.694 18:17:02 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.694 00:08:04.694 real 0m14.254s 00:08:04.694 user 0m10.057s 00:08:04.694 sys 0m2.730s 00:08:04.694 18:17:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.694 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:08:04.694 ************************************ 00:08:04.694 END TEST spdk_dd_basic_rw 00:08:04.694 ************************************ 00:08:04.694 18:17:02 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:04.694 18:17:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.694 18:17:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.694 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:08:04.694 ************************************ 00:08:04.694 START TEST spdk_dd_posix 00:08:04.694 ************************************ 00:08:04.694 18:17:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:04.694 * Looking for test storage... 
00:08:04.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:04.694 18:17:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:04.694 18:17:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:04.694 18:17:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:04.694 18:17:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:04.694 18:17:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:04.694 18:17:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:04.695 18:17:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:04.695 18:17:02 -- scripts/common.sh@335 -- # IFS=.-: 00:08:04.695 18:17:02 -- scripts/common.sh@335 -- # read -ra ver1 00:08:04.695 18:17:02 -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.695 18:17:02 -- scripts/common.sh@336 -- # read -ra ver2 00:08:04.695 18:17:02 -- scripts/common.sh@337 -- # local 'op=<' 00:08:04.695 18:17:02 -- scripts/common.sh@339 -- # ver1_l=2 00:08:04.695 18:17:02 -- scripts/common.sh@340 -- # ver2_l=1 00:08:04.695 18:17:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:04.695 18:17:02 -- scripts/common.sh@343 -- # case "$op" in 00:08:04.695 18:17:02 -- scripts/common.sh@344 -- # : 1 00:08:04.695 18:17:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:04.695 18:17:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:04.695 18:17:02 -- scripts/common.sh@364 -- # decimal 1 00:08:04.695 18:17:02 -- scripts/common.sh@352 -- # local d=1 00:08:04.695 18:17:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.695 18:17:02 -- scripts/common.sh@354 -- # echo 1 00:08:04.954 18:17:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:04.954 18:17:02 -- scripts/common.sh@365 -- # decimal 2 00:08:04.954 18:17:02 -- scripts/common.sh@352 -- # local d=2 00:08:04.954 18:17:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.954 18:17:02 -- scripts/common.sh@354 -- # echo 2 00:08:04.954 18:17:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:04.954 18:17:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:04.954 18:17:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:04.954 18:17:02 -- scripts/common.sh@367 -- # return 0 00:08:04.954 18:17:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.954 18:17:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:04.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.954 --rc genhtml_branch_coverage=1 00:08:04.954 --rc genhtml_function_coverage=1 00:08:04.954 --rc genhtml_legend=1 00:08:04.954 --rc geninfo_all_blocks=1 00:08:04.954 --rc geninfo_unexecuted_blocks=1 00:08:04.954 00:08:04.954 ' 00:08:04.954 18:17:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:04.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.954 --rc genhtml_branch_coverage=1 00:08:04.954 --rc genhtml_function_coverage=1 00:08:04.954 --rc genhtml_legend=1 00:08:04.954 --rc geninfo_all_blocks=1 00:08:04.954 --rc geninfo_unexecuted_blocks=1 00:08:04.954 00:08:04.954 ' 00:08:04.954 18:17:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:04.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.954 --rc genhtml_branch_coverage=1 00:08:04.954 --rc genhtml_function_coverage=1 00:08:04.954 --rc genhtml_legend=1 00:08:04.954 --rc geninfo_all_blocks=1 00:08:04.954 --rc geninfo_unexecuted_blocks=1 00:08:04.954 00:08:04.954 ' 00:08:04.954 18:17:02 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:04.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.954 --rc genhtml_branch_coverage=1 00:08:04.954 --rc genhtml_function_coverage=1 00:08:04.954 --rc genhtml_legend=1 00:08:04.954 --rc geninfo_all_blocks=1 00:08:04.954 --rc geninfo_unexecuted_blocks=1 00:08:04.954 00:08:04.954 ' 00:08:04.954 18:17:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:04.954 18:17:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.954 18:17:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.954 18:17:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.954 18:17:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.954 18:17:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.954 18:17:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.954 18:17:02 -- paths/export.sh@5 -- # export PATH 00:08:04.954 18:17:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.954 18:17:02 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:04.954 18:17:02 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:04.954 18:17:02 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:04.954 18:17:02 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:04.954 18:17:02 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.954 18:17:02 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.954 18:17:02 -- dd/posix.sh@130 -- # tests 00:08:04.954 18:17:02 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:04.954 * First test run, liburing in use 00:08:04.954 18:17:02 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:04.954 18:17:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.954 18:17:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.954 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:08:04.954 ************************************ 00:08:04.954 START TEST dd_flag_append 00:08:04.954 ************************************ 00:08:04.954 18:17:02 -- common/autotest_common.sh@1114 -- # append 00:08:04.954 18:17:02 -- dd/posix.sh@16 -- # local dump0 00:08:04.954 18:17:02 -- dd/posix.sh@17 -- # local dump1 00:08:04.954 18:17:02 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:04.954 18:17:02 -- dd/common.sh@98 -- # xtrace_disable 00:08:04.954 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:08:04.954 18:17:02 -- dd/posix.sh@19 -- # dump0=3729cs1w82nj9nndo5bsdxw4040akcyp 00:08:04.954 18:17:02 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:04.954 18:17:02 -- dd/common.sh@98 -- # xtrace_disable 00:08:04.954 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:08:04.954 18:17:02 -- dd/posix.sh@20 -- # dump1=0s2w0g69mpri88qjxh0q5wyz2zpfrr6c 00:08:04.955 18:17:02 -- dd/posix.sh@22 -- # printf %s 3729cs1w82nj9nndo5bsdxw4040akcyp 00:08:04.955 18:17:02 -- dd/posix.sh@23 -- # printf %s 0s2w0g69mpri88qjxh0q5wyz2zpfrr6c 00:08:04.955 18:17:02 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:04.955 [2024-11-17 18:17:03.038248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:04.955 [2024-11-17 18:17:03.038424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69818 ] 00:08:04.955 [2024-11-17 18:17:03.176954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.955 [2024-11-17 18:17:03.217292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.215  [2024-11-17T18:17:03.482Z] Copying: 32/32 [B] (average 31 kBps) 00:08:05.215 00:08:05.215 18:17:03 -- dd/posix.sh@27 -- # [[ 0s2w0g69mpri88qjxh0q5wyz2zpfrr6c3729cs1w82nj9nndo5bsdxw4040akcyp == \0\s\2\w\0\g\6\9\m\p\r\i\8\8\q\j\x\h\0\q\5\w\y\z\2\z\p\f\r\r\6\c\3\7\2\9\c\s\1\w\8\2\n\j\9\n\n\d\o\5\b\s\d\x\w\4\0\4\0\a\k\c\y\p ]] 00:08:05.215 00:08:05.215 real 0m0.471s 00:08:05.215 user 0m0.242s 00:08:05.215 sys 0m0.104s 00:08:05.215 ************************************ 00:08:05.215 END TEST dd_flag_append 00:08:05.215 ************************************ 00:08:05.215 18:17:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.215 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:08:05.475 18:17:03 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:05.475 18:17:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.475 18:17:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.475 18:17:03 -- common/autotest_common.sh@10 -- # set +x 00:08:05.475 ************************************ 00:08:05.475 START TEST dd_flag_directory 00:08:05.475 ************************************ 00:08:05.475 18:17:03 -- common/autotest_common.sh@1114 -- # directory 00:08:05.475 18:17:03 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.475 18:17:03 -- common/autotest_common.sh@650 -- # local es=0 00:08:05.475 18:17:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.475 18:17:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.475 18:17:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.475 18:17:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.475 18:17:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.475 18:17:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.475 18:17:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.475 18:17:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.475 18:17:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.475 18:17:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.475 [2024-11-17 18:17:03.556424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:05.475 [2024-11-17 18:17:03.556529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69839 ] 00:08:05.475 [2024-11-17 18:17:03.689364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.475 [2024-11-17 18:17:03.729603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.735 [2024-11-17 18:17:03.778630] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.735 [2024-11-17 18:17:03.778699] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.735 [2024-11-17 18:17:03.778723] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.735 [2024-11-17 18:17:03.842185] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:05.735 18:17:03 -- common/autotest_common.sh@653 -- # es=236 00:08:05.735 18:17:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.735 18:17:03 -- common/autotest_common.sh@662 -- # es=108 00:08:05.735 18:17:03 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:05.735 18:17:03 -- common/autotest_common.sh@670 -- # es=1 00:08:05.735 18:17:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.735 18:17:03 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.735 18:17:03 -- common/autotest_common.sh@650 -- # local es=0 00:08:05.735 18:17:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.735 18:17:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.735 18:17:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.735 18:17:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.735 18:17:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.735 18:17:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.735 18:17:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.735 18:17:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.735 18:17:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.735 18:17:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.735 [2024-11-17 18:17:03.974775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:05.735 [2024-11-17 18:17:03.974889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69854 ] 00:08:05.995 [2024-11-17 18:17:04.111020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.995 [2024-11-17 18:17:04.150510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.995 [2024-11-17 18:17:04.199416] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.995 [2024-11-17 18:17:04.199486] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.995 [2024-11-17 18:17:04.199509] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.256 [2024-11-17 18:17:04.262661] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:06.256 18:17:04 -- common/autotest_common.sh@653 -- # es=236 00:08:06.256 18:17:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.256 18:17:04 -- common/autotest_common.sh@662 -- # es=108 00:08:06.256 18:17:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:06.256 18:17:04 -- common/autotest_common.sh@670 -- # es=1 00:08:06.256 18:17:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.256 00:08:06.256 real 0m0.814s 00:08:06.256 user 0m0.402s 00:08:06.256 sys 0m0.202s 00:08:06.256 ************************************ 00:08:06.256 END TEST dd_flag_directory 00:08:06.256 ************************************ 00:08:06.256 18:17:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:06.256 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:08:06.256 18:17:04 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:06.256 18:17:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:06.256 18:17:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.256 18:17:04 -- common/autotest_common.sh@10 -- # set +x 00:08:06.256 ************************************ 00:08:06.256 START TEST dd_flag_nofollow 00:08:06.256 ************************************ 00:08:06.256 18:17:04 -- common/autotest_common.sh@1114 -- # nofollow 00:08:06.256 18:17:04 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:06.256 18:17:04 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:06.256 18:17:04 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:06.256 18:17:04 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:06.256 18:17:04 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.256 18:17:04 -- common/autotest_common.sh@650 -- # local es=0 00:08:06.256 18:17:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.256 18:17:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.256 18:17:04 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.256 18:17:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.256 18:17:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.256 18:17:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.256 18:17:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.256 18:17:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.256 18:17:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.256 18:17:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.256 [2024-11-17 18:17:04.433876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:06.256 [2024-11-17 18:17:04.434019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69877 ] 00:08:06.516 [2024-11-17 18:17:04.570995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.516 [2024-11-17 18:17:04.602885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.516 [2024-11-17 18:17:04.648725] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:06.516 [2024-11-17 18:17:04.648800] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:06.516 [2024-11-17 18:17:04.648830] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.516 [2024-11-17 18:17:04.713561] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:06.777 18:17:04 -- common/autotest_common.sh@653 -- # es=216 00:08:06.777 18:17:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.777 18:17:04 -- common/autotest_common.sh@662 -- # es=88 00:08:06.777 18:17:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:06.777 18:17:04 -- common/autotest_common.sh@670 -- # es=1 00:08:06.777 18:17:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.777 18:17:04 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:06.777 18:17:04 -- common/autotest_common.sh@650 -- # local es=0 00:08:06.777 18:17:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:06.777 18:17:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.777 18:17:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.777 18:17:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.777 18:17:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.777 18:17:04 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.777 18:17:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.777 18:17:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.777 18:17:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.777 18:17:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:06.777 [2024-11-17 18:17:04.845221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:06.777 [2024-11-17 18:17:04.845349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69892 ] 00:08:06.777 [2024-11-17 18:17:04.975019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.777 [2024-11-17 18:17:05.007252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.037 [2024-11-17 18:17:05.049290] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:07.037 [2024-11-17 18:17:05.049371] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:07.037 [2024-11-17 18:17:05.049402] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.037 [2024-11-17 18:17:05.105889] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:07.037 18:17:05 -- common/autotest_common.sh@653 -- # es=216 00:08:07.037 18:17:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.037 18:17:05 -- common/autotest_common.sh@662 -- # es=88 00:08:07.037 18:17:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:07.037 18:17:05 -- common/autotest_common.sh@670 -- # es=1 00:08:07.037 18:17:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.037 18:17:05 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:07.037 18:17:05 -- dd/common.sh@98 -- # xtrace_disable 00:08:07.037 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:08:07.037 18:17:05 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.037 [2024-11-17 18:17:05.236935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:07.037 [2024-11-17 18:17:05.237075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69894 ] 00:08:07.296 [2024-11-17 18:17:05.369522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.296 [2024-11-17 18:17:05.401215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.296  [2024-11-17T18:17:05.823Z] Copying: 512/512 [B] (average 500 kBps) 00:08:07.556 00:08:07.556 18:17:05 -- dd/posix.sh@49 -- # [[ swtkk28ohs41s7sa0tfqdjmp7t2rwnkq07vj28z0krqbbgyu3ndr8wen4kns02mhvw2d2roeya1i3f0azlsynn1tzvqu1ruoa2sr6up6l1rwp5rudfq0e23enyk4hu16ietrtfnxp0jeohtoodwzkum7dsrlpythrk4o3judpkq1hnz8h5mrdbw5yxpl2nhpl02ec9mlxfc0ier05y5j2ditrzgmrjnjepirp1sxp3v5toqkcvmrezgm5etptb90wv01hfa0kcb56pqgjo98a5wl2t7wwupulfexgyu74plvie5ngjkjckz2mhnz5b3fyjumiywrpxyejemm49ru0f4fkbm1qglrsmmiyjuvov4x7qnikvshkjg9ly7ooew5va62ouzom1unx1sa6xyocad4hwgvgys9y7yms5tmwzww8onztndu8aqqfoyzal3jxifu4kgmmgkv5yxi10597xcirfwna4ab71n4o76jtph9brkdy8r1xw3oa0b375m5 == \s\w\t\k\k\2\8\o\h\s\4\1\s\7\s\a\0\t\f\q\d\j\m\p\7\t\2\r\w\n\k\q\0\7\v\j\2\8\z\0\k\r\q\b\b\g\y\u\3\n\d\r\8\w\e\n\4\k\n\s\0\2\m\h\v\w\2\d\2\r\o\e\y\a\1\i\3\f\0\a\z\l\s\y\n\n\1\t\z\v\q\u\1\r\u\o\a\2\s\r\6\u\p\6\l\1\r\w\p\5\r\u\d\f\q\0\e\2\3\e\n\y\k\4\h\u\1\6\i\e\t\r\t\f\n\x\p\0\j\e\o\h\t\o\o\d\w\z\k\u\m\7\d\s\r\l\p\y\t\h\r\k\4\o\3\j\u\d\p\k\q\1\h\n\z\8\h\5\m\r\d\b\w\5\y\x\p\l\2\n\h\p\l\0\2\e\c\9\m\l\x\f\c\0\i\e\r\0\5\y\5\j\2\d\i\t\r\z\g\m\r\j\n\j\e\p\i\r\p\1\s\x\p\3\v\5\t\o\q\k\c\v\m\r\e\z\g\m\5\e\t\p\t\b\9\0\w\v\0\1\h\f\a\0\k\c\b\5\6\p\q\g\j\o\9\8\a\5\w\l\2\t\7\w\w\u\p\u\l\f\e\x\g\y\u\7\4\p\l\v\i\e\5\n\g\j\k\j\c\k\z\2\m\h\n\z\5\b\3\f\y\j\u\m\i\y\w\r\p\x\y\e\j\e\m\m\4\9\r\u\0\f\4\f\k\b\m\1\q\g\l\r\s\m\m\i\y\j\u\v\o\v\4\x\7\q\n\i\k\v\s\h\k\j\g\9\l\y\7\o\o\e\w\5\v\a\6\2\o\u\z\o\m\1\u\n\x\1\s\a\6\x\y\o\c\a\d\4\h\w\g\v\g\y\s\9\y\7\y\m\s\5\t\m\w\z\w\w\8\o\n\z\t\n\d\u\8\a\q\q\f\o\y\z\a\l\3\j\x\i\f\u\4\k\g\m\m\g\k\v\5\y\x\i\1\0\5\9\7\x\c\i\r\f\w\n\a\4\a\b\7\1\n\4\o\7\6\j\t\p\h\9\b\r\k\d\y\8\r\1\x\w\3\o\a\0\b\3\7\5\m\5 ]] 00:08:07.556 00:08:07.556 real 0m1.218s 00:08:07.556 user 0m0.615s 00:08:07.556 sys 0m0.276s 00:08:07.556 ************************************ 00:08:07.556 END TEST dd_flag_nofollow 00:08:07.556 ************************************ 00:08:07.556 18:17:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.556 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:08:07.556 18:17:05 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:07.556 18:17:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:07.556 18:17:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.556 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:08:07.556 ************************************ 00:08:07.556 START TEST dd_flag_noatime 00:08:07.556 ************************************ 00:08:07.556 18:17:05 -- common/autotest_common.sh@1114 -- # noatime 00:08:07.556 18:17:05 -- dd/posix.sh@53 -- # local atime_if 00:08:07.556 18:17:05 -- dd/posix.sh@54 -- # local atime_of 00:08:07.556 18:17:05 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:07.556 18:17:05 -- dd/common.sh@98 -- # xtrace_disable 00:08:07.556 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:08:07.556 18:17:05 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:07.556 18:17:05 -- dd/posix.sh@60 -- # atime_if=1731867425 
00:08:07.556 18:17:05 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.556 18:17:05 -- dd/posix.sh@61 -- # atime_of=1731867425 00:08:07.556 18:17:05 -- dd/posix.sh@66 -- # sleep 1 00:08:08.496 18:17:06 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.496 [2024-11-17 18:17:06.731964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:08.496 [2024-11-17 18:17:06.732091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69935 ] 00:08:08.756 [2024-11-17 18:17:06.871561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.756 [2024-11-17 18:17:06.912474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.756  [2024-11-17T18:17:07.283Z] Copying: 512/512 [B] (average 500 kBps) 00:08:09.016 00:08:09.016 18:17:07 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:09.016 18:17:07 -- dd/posix.sh@69 -- # (( atime_if == 1731867425 )) 00:08:09.016 18:17:07 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.016 18:17:07 -- dd/posix.sh@70 -- # (( atime_of == 1731867425 )) 00:08:09.016 18:17:07 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.016 [2024-11-17 18:17:07.202105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:09.016 [2024-11-17 18:17:07.202243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69946 ] 00:08:09.276 [2024-11-17 18:17:07.339481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.276 [2024-11-17 18:17:07.372991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.276  [2024-11-17T18:17:07.803Z] Copying: 512/512 [B] (average 500 kBps) 00:08:09.536 00:08:09.536 18:17:07 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:09.536 18:17:07 -- dd/posix.sh@73 -- # (( atime_if < 1731867427 )) 00:08:09.536 00:08:09.536 real 0m1.927s 00:08:09.536 user 0m0.464s 00:08:09.536 sys 0m0.216s 00:08:09.536 ************************************ 00:08:09.536 END TEST dd_flag_noatime 00:08:09.536 ************************************ 00:08:09.536 18:17:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.536 18:17:07 -- common/autotest_common.sh@10 -- # set +x 00:08:09.536 18:17:07 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:09.536 18:17:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:09.536 18:17:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.536 18:17:07 -- common/autotest_common.sh@10 -- # set +x 00:08:09.536 ************************************ 00:08:09.536 START TEST dd_flags_misc 00:08:09.536 ************************************ 00:08:09.536 18:17:07 -- common/autotest_common.sh@1114 -- # io 00:08:09.536 18:17:07 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:09.536 18:17:07 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:09.536 18:17:07 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:09.536 18:17:07 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:09.536 18:17:07 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:09.536 18:17:07 -- dd/common.sh@98 -- # xtrace_disable 00:08:09.536 18:17:07 -- common/autotest_common.sh@10 -- # set +x 00:08:09.536 18:17:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:09.536 18:17:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:09.536 [2024-11-17 18:17:07.696904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:09.536 [2024-11-17 18:17:07.697040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69967 ] 00:08:09.795 [2024-11-17 18:17:07.835424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.795 [2024-11-17 18:17:07.868094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.795  [2024-11-17T18:17:08.322Z] Copying: 512/512 [B] (average 500 kBps) 00:08:10.055 00:08:10.055 18:17:08 -- dd/posix.sh@93 -- # [[ g4qaig8vlc2z38fxs5tf0dif1ezy9f4izretbtyuhdvgaychj3wx47ia78rdvz12gthb7e18qnbq87zmexf8qp31hmibotyzagwv83ij5dficvmiefobk3o7cx7u976o5lp00icsczk7va2n2i85wp11uc8wp3aqu38fjsrlojs5szswc4vpw34nybmc6tzhrlq8rtehl4uinnjhqk4nz00uz1wc8ih4awyr82k6vq9k1y32ictj9ni3fvzg44g46reodvvt6v0j4wubd1emyd0vxji2euhs1vni05su58devlaud2jms372g116476x09qe9turqqr6mj32iizg6v78pn72m66f1mp92vecn51apuo6cu5b8q7powmzs0lyno1oy63ljs471t0q60seax29t6gsgw93z9bmgpslkvhro0q0f9xn3a1gnn79lrptqy2yq8kbqufu4lrvzpjlxltshrjjolfq5xwv4l6kcrlqza0bfyneuzeul2euipfx == \g\4\q\a\i\g\8\v\l\c\2\z\3\8\f\x\s\5\t\f\0\d\i\f\1\e\z\y\9\f\4\i\z\r\e\t\b\t\y\u\h\d\v\g\a\y\c\h\j\3\w\x\4\7\i\a\7\8\r\d\v\z\1\2\g\t\h\b\7\e\1\8\q\n\b\q\8\7\z\m\e\x\f\8\q\p\3\1\h\m\i\b\o\t\y\z\a\g\w\v\8\3\i\j\5\d\f\i\c\v\m\i\e\f\o\b\k\3\o\7\c\x\7\u\9\7\6\o\5\l\p\0\0\i\c\s\c\z\k\7\v\a\2\n\2\i\8\5\w\p\1\1\u\c\8\w\p\3\a\q\u\3\8\f\j\s\r\l\o\j\s\5\s\z\s\w\c\4\v\p\w\3\4\n\y\b\m\c\6\t\z\h\r\l\q\8\r\t\e\h\l\4\u\i\n\n\j\h\q\k\4\n\z\0\0\u\z\1\w\c\8\i\h\4\a\w\y\r\8\2\k\6\v\q\9\k\1\y\3\2\i\c\t\j\9\n\i\3\f\v\z\g\4\4\g\4\6\r\e\o\d\v\v\t\6\v\0\j\4\w\u\b\d\1\e\m\y\d\0\v\x\j\i\2\e\u\h\s\1\v\n\i\0\5\s\u\5\8\d\e\v\l\a\u\d\2\j\m\s\3\7\2\g\1\1\6\4\7\6\x\0\9\q\e\9\t\u\r\q\q\r\6\m\j\3\2\i\i\z\g\6\v\7\8\p\n\7\2\m\6\6\f\1\m\p\9\2\v\e\c\n\5\1\a\p\u\o\6\c\u\5\b\8\q\7\p\o\w\m\z\s\0\l\y\n\o\1\o\y\6\3\l\j\s\4\7\1\t\0\q\6\0\s\e\a\x\2\9\t\6\g\s\g\w\9\3\z\9\b\m\g\p\s\l\k\v\h\r\o\0\q\0\f\9\x\n\3\a\1\g\n\n\7\9\l\r\p\t\q\y\2\y\q\8\k\b\q\u\f\u\4\l\r\v\z\p\j\l\x\l\t\s\h\r\j\j\o\l\f\q\5\x\w\v\4\l\6\k\c\r\l\q\z\a\0\b\f\y\n\e\u\z\e\u\l\2\e\u\i\p\f\x ]] 00:08:10.055 18:17:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:10.055 18:17:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:10.055 [2024-11-17 18:17:08.119088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:10.055 [2024-11-17 18:17:08.119207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69980 ] 00:08:10.055 [2024-11-17 18:17:08.248006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.055 [2024-11-17 18:17:08.281009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.315  [2024-11-17T18:17:08.582Z] Copying: 512/512 [B] (average 500 kBps) 00:08:10.315 00:08:10.315 18:17:08 -- dd/posix.sh@93 -- # [[ g4qaig8vlc2z38fxs5tf0dif1ezy9f4izretbtyuhdvgaychj3wx47ia78rdvz12gthb7e18qnbq87zmexf8qp31hmibotyzagwv83ij5dficvmiefobk3o7cx7u976o5lp00icsczk7va2n2i85wp11uc8wp3aqu38fjsrlojs5szswc4vpw34nybmc6tzhrlq8rtehl4uinnjhqk4nz00uz1wc8ih4awyr82k6vq9k1y32ictj9ni3fvzg44g46reodvvt6v0j4wubd1emyd0vxji2euhs1vni05su58devlaud2jms372g116476x09qe9turqqr6mj32iizg6v78pn72m66f1mp92vecn51apuo6cu5b8q7powmzs0lyno1oy63ljs471t0q60seax29t6gsgw93z9bmgpslkvhro0q0f9xn3a1gnn79lrptqy2yq8kbqufu4lrvzpjlxltshrjjolfq5xwv4l6kcrlqza0bfyneuzeul2euipfx == \g\4\q\a\i\g\8\v\l\c\2\z\3\8\f\x\s\5\t\f\0\d\i\f\1\e\z\y\9\f\4\i\z\r\e\t\b\t\y\u\h\d\v\g\a\y\c\h\j\3\w\x\4\7\i\a\7\8\r\d\v\z\1\2\g\t\h\b\7\e\1\8\q\n\b\q\8\7\z\m\e\x\f\8\q\p\3\1\h\m\i\b\o\t\y\z\a\g\w\v\8\3\i\j\5\d\f\i\c\v\m\i\e\f\o\b\k\3\o\7\c\x\7\u\9\7\6\o\5\l\p\0\0\i\c\s\c\z\k\7\v\a\2\n\2\i\8\5\w\p\1\1\u\c\8\w\p\3\a\q\u\3\8\f\j\s\r\l\o\j\s\5\s\z\s\w\c\4\v\p\w\3\4\n\y\b\m\c\6\t\z\h\r\l\q\8\r\t\e\h\l\4\u\i\n\n\j\h\q\k\4\n\z\0\0\u\z\1\w\c\8\i\h\4\a\w\y\r\8\2\k\6\v\q\9\k\1\y\3\2\i\c\t\j\9\n\i\3\f\v\z\g\4\4\g\4\6\r\e\o\d\v\v\t\6\v\0\j\4\w\u\b\d\1\e\m\y\d\0\v\x\j\i\2\e\u\h\s\1\v\n\i\0\5\s\u\5\8\d\e\v\l\a\u\d\2\j\m\s\3\7\2\g\1\1\6\4\7\6\x\0\9\q\e\9\t\u\r\q\q\r\6\m\j\3\2\i\i\z\g\6\v\7\8\p\n\7\2\m\6\6\f\1\m\p\9\2\v\e\c\n\5\1\a\p\u\o\6\c\u\5\b\8\q\7\p\o\w\m\z\s\0\l\y\n\o\1\o\y\6\3\l\j\s\4\7\1\t\0\q\6\0\s\e\a\x\2\9\t\6\g\s\g\w\9\3\z\9\b\m\g\p\s\l\k\v\h\r\o\0\q\0\f\9\x\n\3\a\1\g\n\n\7\9\l\r\p\t\q\y\2\y\q\8\k\b\q\u\f\u\4\l\r\v\z\p\j\l\x\l\t\s\h\r\j\j\o\l\f\q\5\x\w\v\4\l\6\k\c\r\l\q\z\a\0\b\f\y\n\e\u\z\e\u\l\2\e\u\i\p\f\x ]] 00:08:10.315 18:17:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:10.315 18:17:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:10.315 [2024-11-17 18:17:08.520917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:10.315 [2024-11-17 18:17:08.521024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69982 ] 00:08:10.575 [2024-11-17 18:17:08.656185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.575 [2024-11-17 18:17:08.687673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.575  [2024-11-17T18:17:09.102Z] Copying: 512/512 [B] (average 166 kBps) 00:08:10.835 00:08:10.835 18:17:08 -- dd/posix.sh@93 -- # [[ g4qaig8vlc2z38fxs5tf0dif1ezy9f4izretbtyuhdvgaychj3wx47ia78rdvz12gthb7e18qnbq87zmexf8qp31hmibotyzagwv83ij5dficvmiefobk3o7cx7u976o5lp00icsczk7va2n2i85wp11uc8wp3aqu38fjsrlojs5szswc4vpw34nybmc6tzhrlq8rtehl4uinnjhqk4nz00uz1wc8ih4awyr82k6vq9k1y32ictj9ni3fvzg44g46reodvvt6v0j4wubd1emyd0vxji2euhs1vni05su58devlaud2jms372g116476x09qe9turqqr6mj32iizg6v78pn72m66f1mp92vecn51apuo6cu5b8q7powmzs0lyno1oy63ljs471t0q60seax29t6gsgw93z9bmgpslkvhro0q0f9xn3a1gnn79lrptqy2yq8kbqufu4lrvzpjlxltshrjjolfq5xwv4l6kcrlqza0bfyneuzeul2euipfx == \g\4\q\a\i\g\8\v\l\c\2\z\3\8\f\x\s\5\t\f\0\d\i\f\1\e\z\y\9\f\4\i\z\r\e\t\b\t\y\u\h\d\v\g\a\y\c\h\j\3\w\x\4\7\i\a\7\8\r\d\v\z\1\2\g\t\h\b\7\e\1\8\q\n\b\q\8\7\z\m\e\x\f\8\q\p\3\1\h\m\i\b\o\t\y\z\a\g\w\v\8\3\i\j\5\d\f\i\c\v\m\i\e\f\o\b\k\3\o\7\c\x\7\u\9\7\6\o\5\l\p\0\0\i\c\s\c\z\k\7\v\a\2\n\2\i\8\5\w\p\1\1\u\c\8\w\p\3\a\q\u\3\8\f\j\s\r\l\o\j\s\5\s\z\s\w\c\4\v\p\w\3\4\n\y\b\m\c\6\t\z\h\r\l\q\8\r\t\e\h\l\4\u\i\n\n\j\h\q\k\4\n\z\0\0\u\z\1\w\c\8\i\h\4\a\w\y\r\8\2\k\6\v\q\9\k\1\y\3\2\i\c\t\j\9\n\i\3\f\v\z\g\4\4\g\4\6\r\e\o\d\v\v\t\6\v\0\j\4\w\u\b\d\1\e\m\y\d\0\v\x\j\i\2\e\u\h\s\1\v\n\i\0\5\s\u\5\8\d\e\v\l\a\u\d\2\j\m\s\3\7\2\g\1\1\6\4\7\6\x\0\9\q\e\9\t\u\r\q\q\r\6\m\j\3\2\i\i\z\g\6\v\7\8\p\n\7\2\m\6\6\f\1\m\p\9\2\v\e\c\n\5\1\a\p\u\o\6\c\u\5\b\8\q\7\p\o\w\m\z\s\0\l\y\n\o\1\o\y\6\3\l\j\s\4\7\1\t\0\q\6\0\s\e\a\x\2\9\t\6\g\s\g\w\9\3\z\9\b\m\g\p\s\l\k\v\h\r\o\0\q\0\f\9\x\n\3\a\1\g\n\n\7\9\l\r\p\t\q\y\2\y\q\8\k\b\q\u\f\u\4\l\r\v\z\p\j\l\x\l\t\s\h\r\j\j\o\l\f\q\5\x\w\v\4\l\6\k\c\r\l\q\z\a\0\b\f\y\n\e\u\z\e\u\l\2\e\u\i\p\f\x ]] 00:08:10.835 18:17:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:10.835 18:17:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:10.835 [2024-11-17 18:17:08.942107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:10.835 [2024-11-17 18:17:08.942467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69990 ] 00:08:10.835 [2024-11-17 18:17:09.090801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.097 [2024-11-17 18:17:09.133346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.097  [2024-11-17T18:17:09.364Z] Copying: 512/512 [B] (average 500 kBps) 00:08:11.097 00:08:11.418 18:17:09 -- dd/posix.sh@93 -- # [[ g4qaig8vlc2z38fxs5tf0dif1ezy9f4izretbtyuhdvgaychj3wx47ia78rdvz12gthb7e18qnbq87zmexf8qp31hmibotyzagwv83ij5dficvmiefobk3o7cx7u976o5lp00icsczk7va2n2i85wp11uc8wp3aqu38fjsrlojs5szswc4vpw34nybmc6tzhrlq8rtehl4uinnjhqk4nz00uz1wc8ih4awyr82k6vq9k1y32ictj9ni3fvzg44g46reodvvt6v0j4wubd1emyd0vxji2euhs1vni05su58devlaud2jms372g116476x09qe9turqqr6mj32iizg6v78pn72m66f1mp92vecn51apuo6cu5b8q7powmzs0lyno1oy63ljs471t0q60seax29t6gsgw93z9bmgpslkvhro0q0f9xn3a1gnn79lrptqy2yq8kbqufu4lrvzpjlxltshrjjolfq5xwv4l6kcrlqza0bfyneuzeul2euipfx == \g\4\q\a\i\g\8\v\l\c\2\z\3\8\f\x\s\5\t\f\0\d\i\f\1\e\z\y\9\f\4\i\z\r\e\t\b\t\y\u\h\d\v\g\a\y\c\h\j\3\w\x\4\7\i\a\7\8\r\d\v\z\1\2\g\t\h\b\7\e\1\8\q\n\b\q\8\7\z\m\e\x\f\8\q\p\3\1\h\m\i\b\o\t\y\z\a\g\w\v\8\3\i\j\5\d\f\i\c\v\m\i\e\f\o\b\k\3\o\7\c\x\7\u\9\7\6\o\5\l\p\0\0\i\c\s\c\z\k\7\v\a\2\n\2\i\8\5\w\p\1\1\u\c\8\w\p\3\a\q\u\3\8\f\j\s\r\l\o\j\s\5\s\z\s\w\c\4\v\p\w\3\4\n\y\b\m\c\6\t\z\h\r\l\q\8\r\t\e\h\l\4\u\i\n\n\j\h\q\k\4\n\z\0\0\u\z\1\w\c\8\i\h\4\a\w\y\r\8\2\k\6\v\q\9\k\1\y\3\2\i\c\t\j\9\n\i\3\f\v\z\g\4\4\g\4\6\r\e\o\d\v\v\t\6\v\0\j\4\w\u\b\d\1\e\m\y\d\0\v\x\j\i\2\e\u\h\s\1\v\n\i\0\5\s\u\5\8\d\e\v\l\a\u\d\2\j\m\s\3\7\2\g\1\1\6\4\7\6\x\0\9\q\e\9\t\u\r\q\q\r\6\m\j\3\2\i\i\z\g\6\v\7\8\p\n\7\2\m\6\6\f\1\m\p\9\2\v\e\c\n\5\1\a\p\u\o\6\c\u\5\b\8\q\7\p\o\w\m\z\s\0\l\y\n\o\1\o\y\6\3\l\j\s\4\7\1\t\0\q\6\0\s\e\a\x\2\9\t\6\g\s\g\w\9\3\z\9\b\m\g\p\s\l\k\v\h\r\o\0\q\0\f\9\x\n\3\a\1\g\n\n\7\9\l\r\p\t\q\y\2\y\q\8\k\b\q\u\f\u\4\l\r\v\z\p\j\l\x\l\t\s\h\r\j\j\o\l\f\q\5\x\w\v\4\l\6\k\c\r\l\q\z\a\0\b\f\y\n\e\u\z\e\u\l\2\e\u\i\p\f\x ]] 00:08:11.418 18:17:09 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:11.418 18:17:09 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:11.418 18:17:09 -- dd/common.sh@98 -- # xtrace_disable 00:08:11.418 18:17:09 -- common/autotest_common.sh@10 -- # set +x 00:08:11.418 18:17:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.418 18:17:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:11.418 [2024-11-17 18:17:09.420706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:11.418 [2024-11-17 18:17:09.420880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69997 ] 00:08:11.418 [2024-11-17 18:17:09.558636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.418 [2024-11-17 18:17:09.602074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.418  [2024-11-17T18:17:09.965Z] Copying: 512/512 [B] (average 500 kBps) 00:08:11.698 00:08:11.698 18:17:09 -- dd/posix.sh@93 -- # [[ 7ipo2vwxp3q7g8yl18qz8khetx6q7xfg5fes2d9fqyvl258ucgz0nj9muj3znjrtjcw4hb3765try4acnp6vf4swl2eoew89eqhhgv5puhmo3djxfh8t642i9jm34qr56eb42w2g22p7lnxitoy7pf6ih2qn4xrsi2j7ssevi3pmh5vk2vw891budapwkaxg3zmrexzg2bs4k3g6i2jbuozfy8a7120hbrjwwl6w26cq0hli4qt848ew0xcxb7u3dckf2nzdesmxj1ajuewadejkubrgzs4bq10ru36yvql6qweyorrg8lyao1cvom78s5u9wueap8xdbnyg6wasj55pma9w3dde5xbwmd033ulavfjhoif1b2qp6njumi64ykf8mpdeffe777o0ui927omgv8t6wy7ooccu5w30wllacjd7rwx5q1jtmlviwfa2nqu2bo3g1y56ycxknm4m46v2lxzcv0lfbikmmehcpygy8gblw0ywfezcd7h8qofu == \7\i\p\o\2\v\w\x\p\3\q\7\g\8\y\l\1\8\q\z\8\k\h\e\t\x\6\q\7\x\f\g\5\f\e\s\2\d\9\f\q\y\v\l\2\5\8\u\c\g\z\0\n\j\9\m\u\j\3\z\n\j\r\t\j\c\w\4\h\b\3\7\6\5\t\r\y\4\a\c\n\p\6\v\f\4\s\w\l\2\e\o\e\w\8\9\e\q\h\h\g\v\5\p\u\h\m\o\3\d\j\x\f\h\8\t\6\4\2\i\9\j\m\3\4\q\r\5\6\e\b\4\2\w\2\g\2\2\p\7\l\n\x\i\t\o\y\7\p\f\6\i\h\2\q\n\4\x\r\s\i\2\j\7\s\s\e\v\i\3\p\m\h\5\v\k\2\v\w\8\9\1\b\u\d\a\p\w\k\a\x\g\3\z\m\r\e\x\z\g\2\b\s\4\k\3\g\6\i\2\j\b\u\o\z\f\y\8\a\7\1\2\0\h\b\r\j\w\w\l\6\w\2\6\c\q\0\h\l\i\4\q\t\8\4\8\e\w\0\x\c\x\b\7\u\3\d\c\k\f\2\n\z\d\e\s\m\x\j\1\a\j\u\e\w\a\d\e\j\k\u\b\r\g\z\s\4\b\q\1\0\r\u\3\6\y\v\q\l\6\q\w\e\y\o\r\r\g\8\l\y\a\o\1\c\v\o\m\7\8\s\5\u\9\w\u\e\a\p\8\x\d\b\n\y\g\6\w\a\s\j\5\5\p\m\a\9\w\3\d\d\e\5\x\b\w\m\d\0\3\3\u\l\a\v\f\j\h\o\i\f\1\b\2\q\p\6\n\j\u\m\i\6\4\y\k\f\8\m\p\d\e\f\f\e\7\7\7\o\0\u\i\9\2\7\o\m\g\v\8\t\6\w\y\7\o\o\c\c\u\5\w\3\0\w\l\l\a\c\j\d\7\r\w\x\5\q\1\j\t\m\l\v\i\w\f\a\2\n\q\u\2\b\o\3\g\1\y\5\6\y\c\x\k\n\m\4\m\4\6\v\2\l\x\z\c\v\0\l\f\b\i\k\m\m\e\h\c\p\y\g\y\8\g\b\l\w\0\y\w\f\e\z\c\d\7\h\8\q\o\f\u ]] 00:08:11.698 18:17:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.698 18:17:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:11.698 [2024-11-17 18:17:09.876378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:11.698 [2024-11-17 18:17:09.876524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70005 ] 00:08:11.957 [2024-11-17 18:17:10.015680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.957 [2024-11-17 18:17:10.058906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.957  [2024-11-17T18:17:10.484Z] Copying: 512/512 [B] (average 500 kBps) 00:08:12.217 00:08:12.217 18:17:10 -- dd/posix.sh@93 -- # [[ 7ipo2vwxp3q7g8yl18qz8khetx6q7xfg5fes2d9fqyvl258ucgz0nj9muj3znjrtjcw4hb3765try4acnp6vf4swl2eoew89eqhhgv5puhmo3djxfh8t642i9jm34qr56eb42w2g22p7lnxitoy7pf6ih2qn4xrsi2j7ssevi3pmh5vk2vw891budapwkaxg3zmrexzg2bs4k3g6i2jbuozfy8a7120hbrjwwl6w26cq0hli4qt848ew0xcxb7u3dckf2nzdesmxj1ajuewadejkubrgzs4bq10ru36yvql6qweyorrg8lyao1cvom78s5u9wueap8xdbnyg6wasj55pma9w3dde5xbwmd033ulavfjhoif1b2qp6njumi64ykf8mpdeffe777o0ui927omgv8t6wy7ooccu5w30wllacjd7rwx5q1jtmlviwfa2nqu2bo3g1y56ycxknm4m46v2lxzcv0lfbikmmehcpygy8gblw0ywfezcd7h8qofu == \7\i\p\o\2\v\w\x\p\3\q\7\g\8\y\l\1\8\q\z\8\k\h\e\t\x\6\q\7\x\f\g\5\f\e\s\2\d\9\f\q\y\v\l\2\5\8\u\c\g\z\0\n\j\9\m\u\j\3\z\n\j\r\t\j\c\w\4\h\b\3\7\6\5\t\r\y\4\a\c\n\p\6\v\f\4\s\w\l\2\e\o\e\w\8\9\e\q\h\h\g\v\5\p\u\h\m\o\3\d\j\x\f\h\8\t\6\4\2\i\9\j\m\3\4\q\r\5\6\e\b\4\2\w\2\g\2\2\p\7\l\n\x\i\t\o\y\7\p\f\6\i\h\2\q\n\4\x\r\s\i\2\j\7\s\s\e\v\i\3\p\m\h\5\v\k\2\v\w\8\9\1\b\u\d\a\p\w\k\a\x\g\3\z\m\r\e\x\z\g\2\b\s\4\k\3\g\6\i\2\j\b\u\o\z\f\y\8\a\7\1\2\0\h\b\r\j\w\w\l\6\w\2\6\c\q\0\h\l\i\4\q\t\8\4\8\e\w\0\x\c\x\b\7\u\3\d\c\k\f\2\n\z\d\e\s\m\x\j\1\a\j\u\e\w\a\d\e\j\k\u\b\r\g\z\s\4\b\q\1\0\r\u\3\6\y\v\q\l\6\q\w\e\y\o\r\r\g\8\l\y\a\o\1\c\v\o\m\7\8\s\5\u\9\w\u\e\a\p\8\x\d\b\n\y\g\6\w\a\s\j\5\5\p\m\a\9\w\3\d\d\e\5\x\b\w\m\d\0\3\3\u\l\a\v\f\j\h\o\i\f\1\b\2\q\p\6\n\j\u\m\i\6\4\y\k\f\8\m\p\d\e\f\f\e\7\7\7\o\0\u\i\9\2\7\o\m\g\v\8\t\6\w\y\7\o\o\c\c\u\5\w\3\0\w\l\l\a\c\j\d\7\r\w\x\5\q\1\j\t\m\l\v\i\w\f\a\2\n\q\u\2\b\o\3\g\1\y\5\6\y\c\x\k\n\m\4\m\4\6\v\2\l\x\z\c\v\0\l\f\b\i\k\m\m\e\h\c\p\y\g\y\8\g\b\l\w\0\y\w\f\e\z\c\d\7\h\8\q\o\f\u ]] 00:08:12.217 18:17:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:12.217 18:17:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:12.217 [2024-11-17 18:17:10.328036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:12.217 [2024-11-17 18:17:10.328181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70012 ] 00:08:12.217 [2024-11-17 18:17:10.465565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.477 [2024-11-17 18:17:10.499994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.477  [2024-11-17T18:17:10.744Z] Copying: 512/512 [B] (average 500 kBps) 00:08:12.477 00:08:12.477 18:17:10 -- dd/posix.sh@93 -- # [[ 7ipo2vwxp3q7g8yl18qz8khetx6q7xfg5fes2d9fqyvl258ucgz0nj9muj3znjrtjcw4hb3765try4acnp6vf4swl2eoew89eqhhgv5puhmo3djxfh8t642i9jm34qr56eb42w2g22p7lnxitoy7pf6ih2qn4xrsi2j7ssevi3pmh5vk2vw891budapwkaxg3zmrexzg2bs4k3g6i2jbuozfy8a7120hbrjwwl6w26cq0hli4qt848ew0xcxb7u3dckf2nzdesmxj1ajuewadejkubrgzs4bq10ru36yvql6qweyorrg8lyao1cvom78s5u9wueap8xdbnyg6wasj55pma9w3dde5xbwmd033ulavfjhoif1b2qp6njumi64ykf8mpdeffe777o0ui927omgv8t6wy7ooccu5w30wllacjd7rwx5q1jtmlviwfa2nqu2bo3g1y56ycxknm4m46v2lxzcv0lfbikmmehcpygy8gblw0ywfezcd7h8qofu == \7\i\p\o\2\v\w\x\p\3\q\7\g\8\y\l\1\8\q\z\8\k\h\e\t\x\6\q\7\x\f\g\5\f\e\s\2\d\9\f\q\y\v\l\2\5\8\u\c\g\z\0\n\j\9\m\u\j\3\z\n\j\r\t\j\c\w\4\h\b\3\7\6\5\t\r\y\4\a\c\n\p\6\v\f\4\s\w\l\2\e\o\e\w\8\9\e\q\h\h\g\v\5\p\u\h\m\o\3\d\j\x\f\h\8\t\6\4\2\i\9\j\m\3\4\q\r\5\6\e\b\4\2\w\2\g\2\2\p\7\l\n\x\i\t\o\y\7\p\f\6\i\h\2\q\n\4\x\r\s\i\2\j\7\s\s\e\v\i\3\p\m\h\5\v\k\2\v\w\8\9\1\b\u\d\a\p\w\k\a\x\g\3\z\m\r\e\x\z\g\2\b\s\4\k\3\g\6\i\2\j\b\u\o\z\f\y\8\a\7\1\2\0\h\b\r\j\w\w\l\6\w\2\6\c\q\0\h\l\i\4\q\t\8\4\8\e\w\0\x\c\x\b\7\u\3\d\c\k\f\2\n\z\d\e\s\m\x\j\1\a\j\u\e\w\a\d\e\j\k\u\b\r\g\z\s\4\b\q\1\0\r\u\3\6\y\v\q\l\6\q\w\e\y\o\r\r\g\8\l\y\a\o\1\c\v\o\m\7\8\s\5\u\9\w\u\e\a\p\8\x\d\b\n\y\g\6\w\a\s\j\5\5\p\m\a\9\w\3\d\d\e\5\x\b\w\m\d\0\3\3\u\l\a\v\f\j\h\o\i\f\1\b\2\q\p\6\n\j\u\m\i\6\4\y\k\f\8\m\p\d\e\f\f\e\7\7\7\o\0\u\i\9\2\7\o\m\g\v\8\t\6\w\y\7\o\o\c\c\u\5\w\3\0\w\l\l\a\c\j\d\7\r\w\x\5\q\1\j\t\m\l\v\i\w\f\a\2\n\q\u\2\b\o\3\g\1\y\5\6\y\c\x\k\n\m\4\m\4\6\v\2\l\x\z\c\v\0\l\f\b\i\k\m\m\e\h\c\p\y\g\y\8\g\b\l\w\0\y\w\f\e\z\c\d\7\h\8\q\o\f\u ]] 00:08:12.477 18:17:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:12.477 18:17:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:12.736 [2024-11-17 18:17:10.751191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:12.736 [2024-11-17 18:17:10.751371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70014 ] 00:08:12.736 [2024-11-17 18:17:10.891417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.736 [2024-11-17 18:17:10.923980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.736  [2024-11-17T18:17:11.277Z] Copying: 512/512 [B] (average 500 kBps) 00:08:13.010 00:08:13.011 18:17:11 -- dd/posix.sh@93 -- # [[ 7ipo2vwxp3q7g8yl18qz8khetx6q7xfg5fes2d9fqyvl258ucgz0nj9muj3znjrtjcw4hb3765try4acnp6vf4swl2eoew89eqhhgv5puhmo3djxfh8t642i9jm34qr56eb42w2g22p7lnxitoy7pf6ih2qn4xrsi2j7ssevi3pmh5vk2vw891budapwkaxg3zmrexzg2bs4k3g6i2jbuozfy8a7120hbrjwwl6w26cq0hli4qt848ew0xcxb7u3dckf2nzdesmxj1ajuewadejkubrgzs4bq10ru36yvql6qweyorrg8lyao1cvom78s5u9wueap8xdbnyg6wasj55pma9w3dde5xbwmd033ulavfjhoif1b2qp6njumi64ykf8mpdeffe777o0ui927omgv8t6wy7ooccu5w30wllacjd7rwx5q1jtmlviwfa2nqu2bo3g1y56ycxknm4m46v2lxzcv0lfbikmmehcpygy8gblw0ywfezcd7h8qofu == \7\i\p\o\2\v\w\x\p\3\q\7\g\8\y\l\1\8\q\z\8\k\h\e\t\x\6\q\7\x\f\g\5\f\e\s\2\d\9\f\q\y\v\l\2\5\8\u\c\g\z\0\n\j\9\m\u\j\3\z\n\j\r\t\j\c\w\4\h\b\3\7\6\5\t\r\y\4\a\c\n\p\6\v\f\4\s\w\l\2\e\o\e\w\8\9\e\q\h\h\g\v\5\p\u\h\m\o\3\d\j\x\f\h\8\t\6\4\2\i\9\j\m\3\4\q\r\5\6\e\b\4\2\w\2\g\2\2\p\7\l\n\x\i\t\o\y\7\p\f\6\i\h\2\q\n\4\x\r\s\i\2\j\7\s\s\e\v\i\3\p\m\h\5\v\k\2\v\w\8\9\1\b\u\d\a\p\w\k\a\x\g\3\z\m\r\e\x\z\g\2\b\s\4\k\3\g\6\i\2\j\b\u\o\z\f\y\8\a\7\1\2\0\h\b\r\j\w\w\l\6\w\2\6\c\q\0\h\l\i\4\q\t\8\4\8\e\w\0\x\c\x\b\7\u\3\d\c\k\f\2\n\z\d\e\s\m\x\j\1\a\j\u\e\w\a\d\e\j\k\u\b\r\g\z\s\4\b\q\1\0\r\u\3\6\y\v\q\l\6\q\w\e\y\o\r\r\g\8\l\y\a\o\1\c\v\o\m\7\8\s\5\u\9\w\u\e\a\p\8\x\d\b\n\y\g\6\w\a\s\j\5\5\p\m\a\9\w\3\d\d\e\5\x\b\w\m\d\0\3\3\u\l\a\v\f\j\h\o\i\f\1\b\2\q\p\6\n\j\u\m\i\6\4\y\k\f\8\m\p\d\e\f\f\e\7\7\7\o\0\u\i\9\2\7\o\m\g\v\8\t\6\w\y\7\o\o\c\c\u\5\w\3\0\w\l\l\a\c\j\d\7\r\w\x\5\q\1\j\t\m\l\v\i\w\f\a\2\n\q\u\2\b\o\3\g\1\y\5\6\y\c\x\k\n\m\4\m\4\6\v\2\l\x\z\c\v\0\l\f\b\i\k\m\m\e\h\c\p\y\g\y\8\g\b\l\w\0\y\w\f\e\z\c\d\7\h\8\q\o\f\u ]] 00:08:13.011 00:08:13.011 real 0m3.486s 00:08:13.011 user 0m1.718s 00:08:13.011 sys 0m0.766s 00:08:13.011 ************************************ 00:08:13.011 END TEST dd_flags_misc 00:08:13.011 ************************************ 00:08:13.011 18:17:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.011 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:13.011 18:17:11 -- dd/posix.sh@131 -- # tests_forced_aio 00:08:13.011 18:17:11 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:13.011 * Second test run, disabling liburing, forcing AIO 00:08:13.011 18:17:11 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:13.011 18:17:11 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:13.011 18:17:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:13.011 18:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.011 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:13.011 ************************************ 00:08:13.011 START TEST dd_flag_append_forced_aio 00:08:13.011 ************************************ 00:08:13.011 18:17:11 -- common/autotest_common.sh@1114 -- # append 00:08:13.011 18:17:11 -- dd/posix.sh@16 -- # local dump0 00:08:13.011 18:17:11 -- dd/posix.sh@17 -- # local dump1 00:08:13.011 18:17:11 -- dd/posix.sh@19 -- # gen_bytes 32 
00:08:13.011 18:17:11 -- dd/common.sh@98 -- # xtrace_disable 00:08:13.011 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:13.011 18:17:11 -- dd/posix.sh@19 -- # dump0=letihsyfppvhxcje15jeefcdf7b7s9o9 00:08:13.011 18:17:11 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:13.011 18:17:11 -- dd/common.sh@98 -- # xtrace_disable 00:08:13.011 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:13.011 18:17:11 -- dd/posix.sh@20 -- # dump1=rwcw1e5j4qz7e3177np8ew3kedga5sz1 00:08:13.011 18:17:11 -- dd/posix.sh@22 -- # printf %s letihsyfppvhxcje15jeefcdf7b7s9o9 00:08:13.011 18:17:11 -- dd/posix.sh@23 -- # printf %s rwcw1e5j4qz7e3177np8ew3kedga5sz1 00:08:13.011 18:17:11 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:13.011 [2024-11-17 18:17:11.234005] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:13.011 [2024-11-17 18:17:11.234143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70046 ] 00:08:13.270 [2024-11-17 18:17:11.370686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.270 [2024-11-17 18:17:11.404140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.270  [2024-11-17T18:17:11.795Z] Copying: 32/32 [B] (average 31 kBps) 00:08:13.528 00:08:13.528 18:17:11 -- dd/posix.sh@27 -- # [[ rwcw1e5j4qz7e3177np8ew3kedga5sz1letihsyfppvhxcje15jeefcdf7b7s9o9 == \r\w\c\w\1\e\5\j\4\q\z\7\e\3\1\7\7\n\p\8\e\w\3\k\e\d\g\a\5\s\z\1\l\e\t\i\h\s\y\f\p\p\v\h\x\c\j\e\1\5\j\e\e\f\c\d\f\7\b\7\s\9\o\9 ]] 00:08:13.528 00:08:13.528 real 0m0.411s 00:08:13.528 user 0m0.186s 00:08:13.528 sys 0m0.103s 00:08:13.528 18:17:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.529 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:13.529 ************************************ 00:08:13.529 END TEST dd_flag_append_forced_aio 00:08:13.529 ************************************ 00:08:13.529 18:17:11 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:13.529 18:17:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:13.529 18:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.529 18:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:13.529 ************************************ 00:08:13.529 START TEST dd_flag_directory_forced_aio 00:08:13.529 ************************************ 00:08:13.529 18:17:11 -- common/autotest_common.sh@1114 -- # directory 00:08:13.529 18:17:11 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.529 18:17:11 -- common/autotest_common.sh@650 -- # local es=0 00:08:13.529 18:17:11 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.529 18:17:11 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.529 18:17:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.529 18:17:11 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.529 18:17:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.529 18:17:11 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.529 18:17:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.529 18:17:11 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.529 18:17:11 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.529 18:17:11 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.529 [2024-11-17 18:17:11.692575] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:13.529 [2024-11-17 18:17:11.692682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70067 ] 00:08:13.787 [2024-11-17 18:17:11.830119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.787 [2024-11-17 18:17:11.863185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.787 [2024-11-17 18:17:11.907810] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:13.787 [2024-11-17 18:17:11.907870] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:13.787 [2024-11-17 18:17:11.907888] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.787 [2024-11-17 18:17:11.970413] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:13.787 18:17:12 -- common/autotest_common.sh@653 -- # es=236 00:08:13.787 18:17:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.787 18:17:12 -- common/autotest_common.sh@662 -- # es=108 00:08:13.787 18:17:12 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:13.787 18:17:12 -- common/autotest_common.sh@670 -- # es=1 00:08:13.787 18:17:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.787 18:17:12 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:13.787 18:17:12 -- common/autotest_common.sh@650 -- # local es=0 00:08:13.787 18:17:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:13.787 18:17:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.787 18:17:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.787 18:17:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.787 18:17:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.787 18:17:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.787 18:17:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.787 18:17:12 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.787 18:17:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.787 18:17:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:14.045 [2024-11-17 18:17:12.089104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:14.045 [2024-11-17 18:17:12.089202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70077 ] 00:08:14.045 [2024-11-17 18:17:12.231781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.045 [2024-11-17 18:17:12.267204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.045 [2024-11-17 18:17:12.309860] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:14.045 [2024-11-17 18:17:12.309924] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:14.045 [2024-11-17 18:17:12.309962] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.303 [2024-11-17 18:17:12.368034] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:14.303 18:17:12 -- common/autotest_common.sh@653 -- # es=236 00:08:14.303 18:17:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.303 18:17:12 -- common/autotest_common.sh@662 -- # es=108 00:08:14.303 18:17:12 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:14.303 18:17:12 -- common/autotest_common.sh@670 -- # es=1 00:08:14.303 18:17:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.303 00:08:14.303 real 0m0.791s 00:08:14.303 user 0m0.402s 00:08:14.303 sys 0m0.180s 00:08:14.303 18:17:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.303 ************************************ 00:08:14.303 END TEST dd_flag_directory_forced_aio 00:08:14.303 18:17:12 -- common/autotest_common.sh@10 -- # set +x 00:08:14.303 ************************************ 00:08:14.303 18:17:12 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:14.303 18:17:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.303 18:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.303 18:17:12 -- common/autotest_common.sh@10 -- # set +x 00:08:14.303 ************************************ 00:08:14.303 START TEST dd_flag_nofollow_forced_aio 00:08:14.303 ************************************ 00:08:14.303 18:17:12 -- common/autotest_common.sh@1114 -- # nofollow 00:08:14.303 18:17:12 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:14.303 18:17:12 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:14.303 18:17:12 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:14.303 18:17:12 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:14.303 18:17:12 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.303 18:17:12 -- common/autotest_common.sh@650 -- # local es=0 00:08:14.303 18:17:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.303 18:17:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.303 18:17:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.303 18:17:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.303 18:17:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.303 18:17:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.303 18:17:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.303 18:17:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.303 18:17:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.303 18:17:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.303 [2024-11-17 18:17:12.545106] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:14.303 [2024-11-17 18:17:12.545204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70105 ] 00:08:14.561 [2024-11-17 18:17:12.685264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.561 [2024-11-17 18:17:12.727688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.561 [2024-11-17 18:17:12.780548] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:14.561 [2024-11-17 18:17:12.780614] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:14.561 [2024-11-17 18:17:12.780632] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.818 [2024-11-17 18:17:12.844723] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:14.818 18:17:12 -- common/autotest_common.sh@653 -- # es=216 00:08:14.818 18:17:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.818 18:17:12 -- common/autotest_common.sh@662 -- # es=88 00:08:14.818 18:17:12 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:14.818 18:17:12 -- common/autotest_common.sh@670 -- # es=1 00:08:14.818 18:17:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.818 18:17:12 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:14.818 18:17:12 -- common/autotest_common.sh@650 -- # local es=0 00:08:14.818 18:17:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:14.818 18:17:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.818 18:17:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.818 18:17:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.818 18:17:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.818 18:17:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.818 18:17:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.818 18:17:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.818 18:17:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.818 18:17:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:14.818 [2024-11-17 18:17:12.969013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:14.818 [2024-11-17 18:17:12.969110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70115 ] 00:08:15.077 [2024-11-17 18:17:13.107401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.077 [2024-11-17 18:17:13.154186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.077 [2024-11-17 18:17:13.213845] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:15.077 [2024-11-17 18:17:13.213907] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:15.077 [2024-11-17 18:17:13.213939] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.077 [2024-11-17 18:17:13.276631] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:15.077 18:17:13 -- common/autotest_common.sh@653 -- # es=216 00:08:15.077 18:17:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:15.077 18:17:13 -- common/autotest_common.sh@662 -- # es=88 00:08:15.077 18:17:13 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:15.077 18:17:13 -- common/autotest_common.sh@670 -- # es=1 00:08:15.077 18:17:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:15.077 18:17:13 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:15.077 18:17:13 -- dd/common.sh@98 -- # xtrace_disable 00:08:15.077 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:08:15.335 18:17:13 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.336 [2024-11-17 18:17:13.393480] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:15.336 [2024-11-17 18:17:13.393584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70122 ] 00:08:15.336 [2024-11-17 18:17:13.531767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.336 [2024-11-17 18:17:13.575035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.594  [2024-11-17T18:17:13.861Z] Copying: 512/512 [B] (average 500 kBps) 00:08:15.594 00:08:15.594 ************************************ 00:08:15.594 END TEST dd_flag_nofollow_forced_aio 00:08:15.594 ************************************ 00:08:15.594 18:17:13 -- dd/posix.sh@49 -- # [[ mddmzucc40yf5h6vu7s39o0l7fbqgrehvrvxp8phsj4tcv58w393xca1njucnkpze4k06ztqvja1g8z8rbyso7tcjs5vhz51cu6qjy7nnu6nqj0rmrwdgrtwe6smea0hei5lhwzzpbi6sscugzbr9jpt3olk9a7uufpepjff0uyf2m61tvcujzvslqv9eyfm7po7cd1vuc2x60ztoz9d5xiqy88z3264lnf07az7b0d2r9c214tf5jywf39z4bf8uwkrwebaw4to0lc2k6ee5uggu3ichugmv3zgf3a4kfxa7j630qlt9sfqo8zpfcx6o8qzdlrfhsxuker7a60wfvlwvq4ih992rw4zku6iun6gtwc1uqg3lefs2pqg8vq3832o8mm4rdbpxwrwa134i2x5nykuynuv0pfxih04qvp1nqkecbh68t2b18t5qbdn3ljia1s0vmjrhy8vhhtvvfyme22fhy8ptrjsg51j18j4krfpbqcbo4dbjm5vxl80 == \m\d\d\m\z\u\c\c\4\0\y\f\5\h\6\v\u\7\s\3\9\o\0\l\7\f\b\q\g\r\e\h\v\r\v\x\p\8\p\h\s\j\4\t\c\v\5\8\w\3\9\3\x\c\a\1\n\j\u\c\n\k\p\z\e\4\k\0\6\z\t\q\v\j\a\1\g\8\z\8\r\b\y\s\o\7\t\c\j\s\5\v\h\z\5\1\c\u\6\q\j\y\7\n\n\u\6\n\q\j\0\r\m\r\w\d\g\r\t\w\e\6\s\m\e\a\0\h\e\i\5\l\h\w\z\z\p\b\i\6\s\s\c\u\g\z\b\r\9\j\p\t\3\o\l\k\9\a\7\u\u\f\p\e\p\j\f\f\0\u\y\f\2\m\6\1\t\v\c\u\j\z\v\s\l\q\v\9\e\y\f\m\7\p\o\7\c\d\1\v\u\c\2\x\6\0\z\t\o\z\9\d\5\x\i\q\y\8\8\z\3\2\6\4\l\n\f\0\7\a\z\7\b\0\d\2\r\9\c\2\1\4\t\f\5\j\y\w\f\3\9\z\4\b\f\8\u\w\k\r\w\e\b\a\w\4\t\o\0\l\c\2\k\6\e\e\5\u\g\g\u\3\i\c\h\u\g\m\v\3\z\g\f\3\a\4\k\f\x\a\7\j\6\3\0\q\l\t\9\s\f\q\o\8\z\p\f\c\x\6\o\8\q\z\d\l\r\f\h\s\x\u\k\e\r\7\a\6\0\w\f\v\l\w\v\q\4\i\h\9\9\2\r\w\4\z\k\u\6\i\u\n\6\g\t\w\c\1\u\q\g\3\l\e\f\s\2\p\q\g\8\v\q\3\8\3\2\o\8\m\m\4\r\d\b\p\x\w\r\w\a\1\3\4\i\2\x\5\n\y\k\u\y\n\u\v\0\p\f\x\i\h\0\4\q\v\p\1\n\q\k\e\c\b\h\6\8\t\2\b\1\8\t\5\q\b\d\n\3\l\j\i\a\1\s\0\v\m\j\r\h\y\8\v\h\h\t\v\v\f\y\m\e\2\2\f\h\y\8\p\t\r\j\s\g\5\1\j\1\8\j\4\k\r\f\p\b\q\c\b\o\4\d\b\j\m\5\v\x\l\8\0 ]] 00:08:15.594 00:08:15.594 real 0m1.290s 00:08:15.594 user 0m0.614s 00:08:15.594 sys 0m0.345s 00:08:15.594 18:17:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.594 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:08:15.594 18:17:13 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:15.594 18:17:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:15.594 18:17:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.594 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:08:15.594 ************************************ 00:08:15.594 START TEST dd_flag_noatime_forced_aio 00:08:15.594 ************************************ 00:08:15.594 18:17:13 -- common/autotest_common.sh@1114 -- # noatime 00:08:15.594 18:17:13 -- dd/posix.sh@53 -- # local atime_if 00:08:15.594 18:17:13 -- dd/posix.sh@54 -- # local atime_of 00:08:15.595 18:17:13 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:15.595 18:17:13 -- dd/common.sh@98 -- # xtrace_disable 00:08:15.595 18:17:13 -- common/autotest_common.sh@10 -- # set +x 00:08:15.595 18:17:13 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.595 18:17:13 -- dd/posix.sh@60 -- 
# atime_if=1731867433 00:08:15.595 18:17:13 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.595 18:17:13 -- dd/posix.sh@61 -- # atime_of=1731867433 00:08:15.595 18:17:13 -- dd/posix.sh@66 -- # sleep 1 00:08:16.971 18:17:14 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.971 [2024-11-17 18:17:14.903473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:16.971 [2024-11-17 18:17:14.903577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70163 ] 00:08:16.971 [2024-11-17 18:17:15.042855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.971 [2024-11-17 18:17:15.089573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.971  [2024-11-17T18:17:15.498Z] Copying: 512/512 [B] (average 500 kBps) 00:08:17.231 00:08:17.231 18:17:15 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.231 18:17:15 -- dd/posix.sh@69 -- # (( atime_if == 1731867433 )) 00:08:17.231 18:17:15 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.231 18:17:15 -- dd/posix.sh@70 -- # (( atime_of == 1731867433 )) 00:08:17.231 18:17:15 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.231 [2024-11-17 18:17:15.350090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:17.231 [2024-11-17 18:17:15.350179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70173 ] 00:08:17.231 [2024-11-17 18:17:15.486014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.490 [2024-11-17 18:17:15.519736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.490  [2024-11-17T18:17:15.757Z] Copying: 512/512 [B] (average 500 kBps) 00:08:17.490 00:08:17.490 18:17:15 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.490 18:17:15 -- dd/posix.sh@73 -- # (( atime_if < 1731867435 )) 00:08:17.490 00:08:17.490 real 0m1.868s 00:08:17.490 user 0m0.405s 00:08:17.490 sys 0m0.224s 00:08:17.490 18:17:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.490 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:08:17.490 ************************************ 00:08:17.490 END TEST dd_flag_noatime_forced_aio 00:08:17.490 ************************************ 00:08:17.490 18:17:15 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:17.490 18:17:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.490 18:17:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.490 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:08:17.490 ************************************ 00:08:17.490 START TEST dd_flags_misc_forced_aio 00:08:17.490 ************************************ 00:08:17.490 18:17:15 -- common/autotest_common.sh@1114 -- # io 00:08:17.490 18:17:15 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:17.490 18:17:15 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:17.490 18:17:15 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:17.490 18:17:15 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:17.490 18:17:15 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:17.490 18:17:15 -- dd/common.sh@98 -- # xtrace_disable 00:08:17.490 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:08:17.749 18:17:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.749 18:17:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:17.749 [2024-11-17 18:17:15.802677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:17.749 [2024-11-17 18:17:15.802760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70195 ] 00:08:17.749 [2024-11-17 18:17:15.931375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.749 [2024-11-17 18:17:15.965381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.749  [2024-11-17T18:17:16.275Z] Copying: 512/512 [B] (average 500 kBps) 00:08:18.008 00:08:18.008 18:17:16 -- dd/posix.sh@93 -- # [[ rznpx0r4cg0lre8kvlgvaz5lth5ndx15j19lfvu5707kuhe05eogcw4pf540h7giz6disuhz38r0chz2cf6qs0gtp8yhnq8jd4wlc5hv9i7xuwnxn0qbry4vcwfo6v782g4bhbzcyig7iz4g9bk3s2mb6zlzvhxqpze0e3a50s8hasi45ngdr5vsdwaxyl98b0ooydqtb3jp8miockd78f49r1dlohtsrbpew2dl3qkliawwvj4keq0cst4qe902rrxwe69nnv71ugppnm7dgz2lbr1g89d23wyawmb3ys46w4ioghl5qwqukljr09jec3brk2ds8bkxz5zwqf8r2018tso5sjn2rreab9i8yxkpxwqzysyiuayiaz7b2sz8jqe4xqllczyor9k6a2dko5uj1h5zh2xs65m10wm4ol55wza9tqxvup1tpifbrx8hwzlbfsnlz2wiscmawleqvp3p8g49whkly4l5otzs9u35zj96h15i0t6wq3br2f7a == \r\z\n\p\x\0\r\4\c\g\0\l\r\e\8\k\v\l\g\v\a\z\5\l\t\h\5\n\d\x\1\5\j\1\9\l\f\v\u\5\7\0\7\k\u\h\e\0\5\e\o\g\c\w\4\p\f\5\4\0\h\7\g\i\z\6\d\i\s\u\h\z\3\8\r\0\c\h\z\2\c\f\6\q\s\0\g\t\p\8\y\h\n\q\8\j\d\4\w\l\c\5\h\v\9\i\7\x\u\w\n\x\n\0\q\b\r\y\4\v\c\w\f\o\6\v\7\8\2\g\4\b\h\b\z\c\y\i\g\7\i\z\4\g\9\b\k\3\s\2\m\b\6\z\l\z\v\h\x\q\p\z\e\0\e\3\a\5\0\s\8\h\a\s\i\4\5\n\g\d\r\5\v\s\d\w\a\x\y\l\9\8\b\0\o\o\y\d\q\t\b\3\j\p\8\m\i\o\c\k\d\7\8\f\4\9\r\1\d\l\o\h\t\s\r\b\p\e\w\2\d\l\3\q\k\l\i\a\w\w\v\j\4\k\e\q\0\c\s\t\4\q\e\9\0\2\r\r\x\w\e\6\9\n\n\v\7\1\u\g\p\p\n\m\7\d\g\z\2\l\b\r\1\g\8\9\d\2\3\w\y\a\w\m\b\3\y\s\4\6\w\4\i\o\g\h\l\5\q\w\q\u\k\l\j\r\0\9\j\e\c\3\b\r\k\2\d\s\8\b\k\x\z\5\z\w\q\f\8\r\2\0\1\8\t\s\o\5\s\j\n\2\r\r\e\a\b\9\i\8\y\x\k\p\x\w\q\z\y\s\y\i\u\a\y\i\a\z\7\b\2\s\z\8\j\q\e\4\x\q\l\l\c\z\y\o\r\9\k\6\a\2\d\k\o\5\u\j\1\h\5\z\h\2\x\s\6\5\m\1\0\w\m\4\o\l\5\5\w\z\a\9\t\q\x\v\u\p\1\t\p\i\f\b\r\x\8\h\w\z\l\b\f\s\n\l\z\2\w\i\s\c\m\a\w\l\e\q\v\p\3\p\8\g\4\9\w\h\k\l\y\4\l\5\o\t\z\s\9\u\3\5\z\j\9\6\h\1\5\i\0\t\6\w\q\3\b\r\2\f\7\a ]] 00:08:18.008 18:17:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.008 18:17:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:18.008 [2024-11-17 18:17:16.201422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:18.008 [2024-11-17 18:17:16.201526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70203 ] 00:08:18.267 [2024-11-17 18:17:16.336501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.267 [2024-11-17 18:17:16.386963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.267  [2024-11-17T18:17:16.793Z] Copying: 512/512 [B] (average 500 kBps) 00:08:18.526 00:08:18.527 18:17:16 -- dd/posix.sh@93 -- # [[ rznpx0r4cg0lre8kvlgvaz5lth5ndx15j19lfvu5707kuhe05eogcw4pf540h7giz6disuhz38r0chz2cf6qs0gtp8yhnq8jd4wlc5hv9i7xuwnxn0qbry4vcwfo6v782g4bhbzcyig7iz4g9bk3s2mb6zlzvhxqpze0e3a50s8hasi45ngdr5vsdwaxyl98b0ooydqtb3jp8miockd78f49r1dlohtsrbpew2dl3qkliawwvj4keq0cst4qe902rrxwe69nnv71ugppnm7dgz2lbr1g89d23wyawmb3ys46w4ioghl5qwqukljr09jec3brk2ds8bkxz5zwqf8r2018tso5sjn2rreab9i8yxkpxwqzysyiuayiaz7b2sz8jqe4xqllczyor9k6a2dko5uj1h5zh2xs65m10wm4ol55wza9tqxvup1tpifbrx8hwzlbfsnlz2wiscmawleqvp3p8g49whkly4l5otzs9u35zj96h15i0t6wq3br2f7a == \r\z\n\p\x\0\r\4\c\g\0\l\r\e\8\k\v\l\g\v\a\z\5\l\t\h\5\n\d\x\1\5\j\1\9\l\f\v\u\5\7\0\7\k\u\h\e\0\5\e\o\g\c\w\4\p\f\5\4\0\h\7\g\i\z\6\d\i\s\u\h\z\3\8\r\0\c\h\z\2\c\f\6\q\s\0\g\t\p\8\y\h\n\q\8\j\d\4\w\l\c\5\h\v\9\i\7\x\u\w\n\x\n\0\q\b\r\y\4\v\c\w\f\o\6\v\7\8\2\g\4\b\h\b\z\c\y\i\g\7\i\z\4\g\9\b\k\3\s\2\m\b\6\z\l\z\v\h\x\q\p\z\e\0\e\3\a\5\0\s\8\h\a\s\i\4\5\n\g\d\r\5\v\s\d\w\a\x\y\l\9\8\b\0\o\o\y\d\q\t\b\3\j\p\8\m\i\o\c\k\d\7\8\f\4\9\r\1\d\l\o\h\t\s\r\b\p\e\w\2\d\l\3\q\k\l\i\a\w\w\v\j\4\k\e\q\0\c\s\t\4\q\e\9\0\2\r\r\x\w\e\6\9\n\n\v\7\1\u\g\p\p\n\m\7\d\g\z\2\l\b\r\1\g\8\9\d\2\3\w\y\a\w\m\b\3\y\s\4\6\w\4\i\o\g\h\l\5\q\w\q\u\k\l\j\r\0\9\j\e\c\3\b\r\k\2\d\s\8\b\k\x\z\5\z\w\q\f\8\r\2\0\1\8\t\s\o\5\s\j\n\2\r\r\e\a\b\9\i\8\y\x\k\p\x\w\q\z\y\s\y\i\u\a\y\i\a\z\7\b\2\s\z\8\j\q\e\4\x\q\l\l\c\z\y\o\r\9\k\6\a\2\d\k\o\5\u\j\1\h\5\z\h\2\x\s\6\5\m\1\0\w\m\4\o\l\5\5\w\z\a\9\t\q\x\v\u\p\1\t\p\i\f\b\r\x\8\h\w\z\l\b\f\s\n\l\z\2\w\i\s\c\m\a\w\l\e\q\v\p\3\p\8\g\4\9\w\h\k\l\y\4\l\5\o\t\z\s\9\u\3\5\z\j\9\6\h\1\5\i\0\t\6\w\q\3\b\r\2\f\7\a ]] 00:08:18.527 18:17:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.527 18:17:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:18.527 [2024-11-17 18:17:16.630224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:18.527 [2024-11-17 18:17:16.630377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70210 ] 00:08:18.527 [2024-11-17 18:17:16.759892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.786 [2024-11-17 18:17:16.797570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.786  [2024-11-17T18:17:17.053Z] Copying: 512/512 [B] (average 250 kBps) 00:08:18.786 00:08:18.786 18:17:17 -- dd/posix.sh@93 -- # [[ rznpx0r4cg0lre8kvlgvaz5lth5ndx15j19lfvu5707kuhe05eogcw4pf540h7giz6disuhz38r0chz2cf6qs0gtp8yhnq8jd4wlc5hv9i7xuwnxn0qbry4vcwfo6v782g4bhbzcyig7iz4g9bk3s2mb6zlzvhxqpze0e3a50s8hasi45ngdr5vsdwaxyl98b0ooydqtb3jp8miockd78f49r1dlohtsrbpew2dl3qkliawwvj4keq0cst4qe902rrxwe69nnv71ugppnm7dgz2lbr1g89d23wyawmb3ys46w4ioghl5qwqukljr09jec3brk2ds8bkxz5zwqf8r2018tso5sjn2rreab9i8yxkpxwqzysyiuayiaz7b2sz8jqe4xqllczyor9k6a2dko5uj1h5zh2xs65m10wm4ol55wza9tqxvup1tpifbrx8hwzlbfsnlz2wiscmawleqvp3p8g49whkly4l5otzs9u35zj96h15i0t6wq3br2f7a == \r\z\n\p\x\0\r\4\c\g\0\l\r\e\8\k\v\l\g\v\a\z\5\l\t\h\5\n\d\x\1\5\j\1\9\l\f\v\u\5\7\0\7\k\u\h\e\0\5\e\o\g\c\w\4\p\f\5\4\0\h\7\g\i\z\6\d\i\s\u\h\z\3\8\r\0\c\h\z\2\c\f\6\q\s\0\g\t\p\8\y\h\n\q\8\j\d\4\w\l\c\5\h\v\9\i\7\x\u\w\n\x\n\0\q\b\r\y\4\v\c\w\f\o\6\v\7\8\2\g\4\b\h\b\z\c\y\i\g\7\i\z\4\g\9\b\k\3\s\2\m\b\6\z\l\z\v\h\x\q\p\z\e\0\e\3\a\5\0\s\8\h\a\s\i\4\5\n\g\d\r\5\v\s\d\w\a\x\y\l\9\8\b\0\o\o\y\d\q\t\b\3\j\p\8\m\i\o\c\k\d\7\8\f\4\9\r\1\d\l\o\h\t\s\r\b\p\e\w\2\d\l\3\q\k\l\i\a\w\w\v\j\4\k\e\q\0\c\s\t\4\q\e\9\0\2\r\r\x\w\e\6\9\n\n\v\7\1\u\g\p\p\n\m\7\d\g\z\2\l\b\r\1\g\8\9\d\2\3\w\y\a\w\m\b\3\y\s\4\6\w\4\i\o\g\h\l\5\q\w\q\u\k\l\j\r\0\9\j\e\c\3\b\r\k\2\d\s\8\b\k\x\z\5\z\w\q\f\8\r\2\0\1\8\t\s\o\5\s\j\n\2\r\r\e\a\b\9\i\8\y\x\k\p\x\w\q\z\y\s\y\i\u\a\y\i\a\z\7\b\2\s\z\8\j\q\e\4\x\q\l\l\c\z\y\o\r\9\k\6\a\2\d\k\o\5\u\j\1\h\5\z\h\2\x\s\6\5\m\1\0\w\m\4\o\l\5\5\w\z\a\9\t\q\x\v\u\p\1\t\p\i\f\b\r\x\8\h\w\z\l\b\f\s\n\l\z\2\w\i\s\c\m\a\w\l\e\q\v\p\3\p\8\g\4\9\w\h\k\l\y\4\l\5\o\t\z\s\9\u\3\5\z\j\9\6\h\1\5\i\0\t\6\w\q\3\b\r\2\f\7\a ]] 00:08:18.786 18:17:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.786 18:17:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:18.786 [2024-11-17 18:17:17.050479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:18.786 [2024-11-17 18:17:17.050599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70218 ] 00:08:19.045 [2024-11-17 18:17:17.186772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.045 [2024-11-17 18:17:17.226370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.045  [2024-11-17T18:17:17.571Z] Copying: 512/512 [B] (average 500 kBps) 00:08:19.304 00:08:19.305 18:17:17 -- dd/posix.sh@93 -- # [[ rznpx0r4cg0lre8kvlgvaz5lth5ndx15j19lfvu5707kuhe05eogcw4pf540h7giz6disuhz38r0chz2cf6qs0gtp8yhnq8jd4wlc5hv9i7xuwnxn0qbry4vcwfo6v782g4bhbzcyig7iz4g9bk3s2mb6zlzvhxqpze0e3a50s8hasi45ngdr5vsdwaxyl98b0ooydqtb3jp8miockd78f49r1dlohtsrbpew2dl3qkliawwvj4keq0cst4qe902rrxwe69nnv71ugppnm7dgz2lbr1g89d23wyawmb3ys46w4ioghl5qwqukljr09jec3brk2ds8bkxz5zwqf8r2018tso5sjn2rreab9i8yxkpxwqzysyiuayiaz7b2sz8jqe4xqllczyor9k6a2dko5uj1h5zh2xs65m10wm4ol55wza9tqxvup1tpifbrx8hwzlbfsnlz2wiscmawleqvp3p8g49whkly4l5otzs9u35zj96h15i0t6wq3br2f7a == \r\z\n\p\x\0\r\4\c\g\0\l\r\e\8\k\v\l\g\v\a\z\5\l\t\h\5\n\d\x\1\5\j\1\9\l\f\v\u\5\7\0\7\k\u\h\e\0\5\e\o\g\c\w\4\p\f\5\4\0\h\7\g\i\z\6\d\i\s\u\h\z\3\8\r\0\c\h\z\2\c\f\6\q\s\0\g\t\p\8\y\h\n\q\8\j\d\4\w\l\c\5\h\v\9\i\7\x\u\w\n\x\n\0\q\b\r\y\4\v\c\w\f\o\6\v\7\8\2\g\4\b\h\b\z\c\y\i\g\7\i\z\4\g\9\b\k\3\s\2\m\b\6\z\l\z\v\h\x\q\p\z\e\0\e\3\a\5\0\s\8\h\a\s\i\4\5\n\g\d\r\5\v\s\d\w\a\x\y\l\9\8\b\0\o\o\y\d\q\t\b\3\j\p\8\m\i\o\c\k\d\7\8\f\4\9\r\1\d\l\o\h\t\s\r\b\p\e\w\2\d\l\3\q\k\l\i\a\w\w\v\j\4\k\e\q\0\c\s\t\4\q\e\9\0\2\r\r\x\w\e\6\9\n\n\v\7\1\u\g\p\p\n\m\7\d\g\z\2\l\b\r\1\g\8\9\d\2\3\w\y\a\w\m\b\3\y\s\4\6\w\4\i\o\g\h\l\5\q\w\q\u\k\l\j\r\0\9\j\e\c\3\b\r\k\2\d\s\8\b\k\x\z\5\z\w\q\f\8\r\2\0\1\8\t\s\o\5\s\j\n\2\r\r\e\a\b\9\i\8\y\x\k\p\x\w\q\z\y\s\y\i\u\a\y\i\a\z\7\b\2\s\z\8\j\q\e\4\x\q\l\l\c\z\y\o\r\9\k\6\a\2\d\k\o\5\u\j\1\h\5\z\h\2\x\s\6\5\m\1\0\w\m\4\o\l\5\5\w\z\a\9\t\q\x\v\u\p\1\t\p\i\f\b\r\x\8\h\w\z\l\b\f\s\n\l\z\2\w\i\s\c\m\a\w\l\e\q\v\p\3\p\8\g\4\9\w\h\k\l\y\4\l\5\o\t\z\s\9\u\3\5\z\j\9\6\h\1\5\i\0\t\6\w\q\3\b\r\2\f\7\a ]] 00:08:19.305 18:17:17 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:19.305 18:17:17 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:19.305 18:17:17 -- dd/common.sh@98 -- # xtrace_disable 00:08:19.305 18:17:17 -- common/autotest_common.sh@10 -- # set +x 00:08:19.305 18:17:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.305 18:17:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:19.305 [2024-11-17 18:17:17.487054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:19.305 [2024-11-17 18:17:17.487157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70225 ] 00:08:19.564 [2024-11-17 18:17:17.623113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.564 [2024-11-17 18:17:17.663602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.564  [2024-11-17T18:17:18.090Z] Copying: 512/512 [B] (average 500 kBps) 00:08:19.823 00:08:19.823 18:17:17 -- dd/posix.sh@93 -- # [[ 5yypd59hlx3kcgijcp3ksds61vzdlp104j5czsa4cyyi325mt0w9rln3z55h3jep4ii7grlijb44nqit84hgpsvag0uhggcyihmmapdmvnktu3tuud2u0xjwu0zwjc48tdowq2fi042p00r55ng9p5skuqbroq39abnaj33cwni1psbx39old40d56uaynrrdclfnno437t8c9jwrzmi7otv3c94govtihl8mom0c88z9dzzjkyawe4or6iwu34adfzqkm6xwl237o635fcx172j78x9pusyncltidt09q8h1kt1wx3c5x2e4ns6lrujsinr665pcsub097fpm5it2gqna8t35lv0yvl8yealqazppk3nz300h7ddodytctco91vpqrus1r1ay31669tt2s7cp1n96fpayq9s9mr09tyy7vvoj495p56g9rpp2r79ywou9rs4tgqqixn3gzvg89hzjk57f85pbdys1gx9ggaprcd7lc5xqxetyujqpv9 == \5\y\y\p\d\5\9\h\l\x\3\k\c\g\i\j\c\p\3\k\s\d\s\6\1\v\z\d\l\p\1\0\4\j\5\c\z\s\a\4\c\y\y\i\3\2\5\m\t\0\w\9\r\l\n\3\z\5\5\h\3\j\e\p\4\i\i\7\g\r\l\i\j\b\4\4\n\q\i\t\8\4\h\g\p\s\v\a\g\0\u\h\g\g\c\y\i\h\m\m\a\p\d\m\v\n\k\t\u\3\t\u\u\d\2\u\0\x\j\w\u\0\z\w\j\c\4\8\t\d\o\w\q\2\f\i\0\4\2\p\0\0\r\5\5\n\g\9\p\5\s\k\u\q\b\r\o\q\3\9\a\b\n\a\j\3\3\c\w\n\i\1\p\s\b\x\3\9\o\l\d\4\0\d\5\6\u\a\y\n\r\r\d\c\l\f\n\n\o\4\3\7\t\8\c\9\j\w\r\z\m\i\7\o\t\v\3\c\9\4\g\o\v\t\i\h\l\8\m\o\m\0\c\8\8\z\9\d\z\z\j\k\y\a\w\e\4\o\r\6\i\w\u\3\4\a\d\f\z\q\k\m\6\x\w\l\2\3\7\o\6\3\5\f\c\x\1\7\2\j\7\8\x\9\p\u\s\y\n\c\l\t\i\d\t\0\9\q\8\h\1\k\t\1\w\x\3\c\5\x\2\e\4\n\s\6\l\r\u\j\s\i\n\r\6\6\5\p\c\s\u\b\0\9\7\f\p\m\5\i\t\2\g\q\n\a\8\t\3\5\l\v\0\y\v\l\8\y\e\a\l\q\a\z\p\p\k\3\n\z\3\0\0\h\7\d\d\o\d\y\t\c\t\c\o\9\1\v\p\q\r\u\s\1\r\1\a\y\3\1\6\6\9\t\t\2\s\7\c\p\1\n\9\6\f\p\a\y\q\9\s\9\m\r\0\9\t\y\y\7\v\v\o\j\4\9\5\p\5\6\g\9\r\p\p\2\r\7\9\y\w\o\u\9\r\s\4\t\g\q\q\i\x\n\3\g\z\v\g\8\9\h\z\j\k\5\7\f\8\5\p\b\d\y\s\1\g\x\9\g\g\a\p\r\c\d\7\l\c\5\x\q\x\e\t\y\u\j\q\p\v\9 ]] 00:08:19.823 18:17:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.823 18:17:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:19.823 [2024-11-17 18:17:17.913703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:19.823 [2024-11-17 18:17:17.913801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70227 ] 00:08:19.823 [2024-11-17 18:17:18.051603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.082 [2024-11-17 18:17:18.094985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.083  [2024-11-17T18:17:18.350Z] Copying: 512/512 [B] (average 500 kBps) 00:08:20.083 00:08:20.083 18:17:18 -- dd/posix.sh@93 -- # [[ 5yypd59hlx3kcgijcp3ksds61vzdlp104j5czsa4cyyi325mt0w9rln3z55h3jep4ii7grlijb44nqit84hgpsvag0uhggcyihmmapdmvnktu3tuud2u0xjwu0zwjc48tdowq2fi042p00r55ng9p5skuqbroq39abnaj33cwni1psbx39old40d56uaynrrdclfnno437t8c9jwrzmi7otv3c94govtihl8mom0c88z9dzzjkyawe4or6iwu34adfzqkm6xwl237o635fcx172j78x9pusyncltidt09q8h1kt1wx3c5x2e4ns6lrujsinr665pcsub097fpm5it2gqna8t35lv0yvl8yealqazppk3nz300h7ddodytctco91vpqrus1r1ay31669tt2s7cp1n96fpayq9s9mr09tyy7vvoj495p56g9rpp2r79ywou9rs4tgqqixn3gzvg89hzjk57f85pbdys1gx9ggaprcd7lc5xqxetyujqpv9 == \5\y\y\p\d\5\9\h\l\x\3\k\c\g\i\j\c\p\3\k\s\d\s\6\1\v\z\d\l\p\1\0\4\j\5\c\z\s\a\4\c\y\y\i\3\2\5\m\t\0\w\9\r\l\n\3\z\5\5\h\3\j\e\p\4\i\i\7\g\r\l\i\j\b\4\4\n\q\i\t\8\4\h\g\p\s\v\a\g\0\u\h\g\g\c\y\i\h\m\m\a\p\d\m\v\n\k\t\u\3\t\u\u\d\2\u\0\x\j\w\u\0\z\w\j\c\4\8\t\d\o\w\q\2\f\i\0\4\2\p\0\0\r\5\5\n\g\9\p\5\s\k\u\q\b\r\o\q\3\9\a\b\n\a\j\3\3\c\w\n\i\1\p\s\b\x\3\9\o\l\d\4\0\d\5\6\u\a\y\n\r\r\d\c\l\f\n\n\o\4\3\7\t\8\c\9\j\w\r\z\m\i\7\o\t\v\3\c\9\4\g\o\v\t\i\h\l\8\m\o\m\0\c\8\8\z\9\d\z\z\j\k\y\a\w\e\4\o\r\6\i\w\u\3\4\a\d\f\z\q\k\m\6\x\w\l\2\3\7\o\6\3\5\f\c\x\1\7\2\j\7\8\x\9\p\u\s\y\n\c\l\t\i\d\t\0\9\q\8\h\1\k\t\1\w\x\3\c\5\x\2\e\4\n\s\6\l\r\u\j\s\i\n\r\6\6\5\p\c\s\u\b\0\9\7\f\p\m\5\i\t\2\g\q\n\a\8\t\3\5\l\v\0\y\v\l\8\y\e\a\l\q\a\z\p\p\k\3\n\z\3\0\0\h\7\d\d\o\d\y\t\c\t\c\o\9\1\v\p\q\r\u\s\1\r\1\a\y\3\1\6\6\9\t\t\2\s\7\c\p\1\n\9\6\f\p\a\y\q\9\s\9\m\r\0\9\t\y\y\7\v\v\o\j\4\9\5\p\5\6\g\9\r\p\p\2\r\7\9\y\w\o\u\9\r\s\4\t\g\q\q\i\x\n\3\g\z\v\g\8\9\h\z\j\k\5\7\f\8\5\p\b\d\y\s\1\g\x\9\g\g\a\p\r\c\d\7\l\c\5\x\q\x\e\t\y\u\j\q\p\v\9 ]] 00:08:20.083 18:17:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:20.083 18:17:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:20.083 [2024-11-17 18:17:18.331682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:20.083 [2024-11-17 18:17:18.331781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70235 ] 00:08:20.342 [2024-11-17 18:17:18.469522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.342 [2024-11-17 18:17:18.506195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.342  [2024-11-17T18:17:18.868Z] Copying: 512/512 [B] (average 166 kBps) 00:08:20.601 00:08:20.601 18:17:18 -- dd/posix.sh@93 -- # [[ 5yypd59hlx3kcgijcp3ksds61vzdlp104j5czsa4cyyi325mt0w9rln3z55h3jep4ii7grlijb44nqit84hgpsvag0uhggcyihmmapdmvnktu3tuud2u0xjwu0zwjc48tdowq2fi042p00r55ng9p5skuqbroq39abnaj33cwni1psbx39old40d56uaynrrdclfnno437t8c9jwrzmi7otv3c94govtihl8mom0c88z9dzzjkyawe4or6iwu34adfzqkm6xwl237o635fcx172j78x9pusyncltidt09q8h1kt1wx3c5x2e4ns6lrujsinr665pcsub097fpm5it2gqna8t35lv0yvl8yealqazppk3nz300h7ddodytctco91vpqrus1r1ay31669tt2s7cp1n96fpayq9s9mr09tyy7vvoj495p56g9rpp2r79ywou9rs4tgqqixn3gzvg89hzjk57f85pbdys1gx9ggaprcd7lc5xqxetyujqpv9 == \5\y\y\p\d\5\9\h\l\x\3\k\c\g\i\j\c\p\3\k\s\d\s\6\1\v\z\d\l\p\1\0\4\j\5\c\z\s\a\4\c\y\y\i\3\2\5\m\t\0\w\9\r\l\n\3\z\5\5\h\3\j\e\p\4\i\i\7\g\r\l\i\j\b\4\4\n\q\i\t\8\4\h\g\p\s\v\a\g\0\u\h\g\g\c\y\i\h\m\m\a\p\d\m\v\n\k\t\u\3\t\u\u\d\2\u\0\x\j\w\u\0\z\w\j\c\4\8\t\d\o\w\q\2\f\i\0\4\2\p\0\0\r\5\5\n\g\9\p\5\s\k\u\q\b\r\o\q\3\9\a\b\n\a\j\3\3\c\w\n\i\1\p\s\b\x\3\9\o\l\d\4\0\d\5\6\u\a\y\n\r\r\d\c\l\f\n\n\o\4\3\7\t\8\c\9\j\w\r\z\m\i\7\o\t\v\3\c\9\4\g\o\v\t\i\h\l\8\m\o\m\0\c\8\8\z\9\d\z\z\j\k\y\a\w\e\4\o\r\6\i\w\u\3\4\a\d\f\z\q\k\m\6\x\w\l\2\3\7\o\6\3\5\f\c\x\1\7\2\j\7\8\x\9\p\u\s\y\n\c\l\t\i\d\t\0\9\q\8\h\1\k\t\1\w\x\3\c\5\x\2\e\4\n\s\6\l\r\u\j\s\i\n\r\6\6\5\p\c\s\u\b\0\9\7\f\p\m\5\i\t\2\g\q\n\a\8\t\3\5\l\v\0\y\v\l\8\y\e\a\l\q\a\z\p\p\k\3\n\z\3\0\0\h\7\d\d\o\d\y\t\c\t\c\o\9\1\v\p\q\r\u\s\1\r\1\a\y\3\1\6\6\9\t\t\2\s\7\c\p\1\n\9\6\f\p\a\y\q\9\s\9\m\r\0\9\t\y\y\7\v\v\o\j\4\9\5\p\5\6\g\9\r\p\p\2\r\7\9\y\w\o\u\9\r\s\4\t\g\q\q\i\x\n\3\g\z\v\g\8\9\h\z\j\k\5\7\f\8\5\p\b\d\y\s\1\g\x\9\g\g\a\p\r\c\d\7\l\c\5\x\q\x\e\t\y\u\j\q\p\v\9 ]] 00:08:20.601 18:17:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:20.601 18:17:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:20.601 [2024-11-17 18:17:18.726210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:20.601 [2024-11-17 18:17:18.726356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70242 ] 00:08:20.601 [2024-11-17 18:17:18.855860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.860 [2024-11-17 18:17:18.896251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.860  [2024-11-17T18:17:19.127Z] Copying: 512/512 [B] (average 500 kBps) 00:08:20.860 00:08:20.860 18:17:19 -- dd/posix.sh@93 -- # [[ 5yypd59hlx3kcgijcp3ksds61vzdlp104j5czsa4cyyi325mt0w9rln3z55h3jep4ii7grlijb44nqit84hgpsvag0uhggcyihmmapdmvnktu3tuud2u0xjwu0zwjc48tdowq2fi042p00r55ng9p5skuqbroq39abnaj33cwni1psbx39old40d56uaynrrdclfnno437t8c9jwrzmi7otv3c94govtihl8mom0c88z9dzzjkyawe4or6iwu34adfzqkm6xwl237o635fcx172j78x9pusyncltidt09q8h1kt1wx3c5x2e4ns6lrujsinr665pcsub097fpm5it2gqna8t35lv0yvl8yealqazppk3nz300h7ddodytctco91vpqrus1r1ay31669tt2s7cp1n96fpayq9s9mr09tyy7vvoj495p56g9rpp2r79ywou9rs4tgqqixn3gzvg89hzjk57f85pbdys1gx9ggaprcd7lc5xqxetyujqpv9 == \5\y\y\p\d\5\9\h\l\x\3\k\c\g\i\j\c\p\3\k\s\d\s\6\1\v\z\d\l\p\1\0\4\j\5\c\z\s\a\4\c\y\y\i\3\2\5\m\t\0\w\9\r\l\n\3\z\5\5\h\3\j\e\p\4\i\i\7\g\r\l\i\j\b\4\4\n\q\i\t\8\4\h\g\p\s\v\a\g\0\u\h\g\g\c\y\i\h\m\m\a\p\d\m\v\n\k\t\u\3\t\u\u\d\2\u\0\x\j\w\u\0\z\w\j\c\4\8\t\d\o\w\q\2\f\i\0\4\2\p\0\0\r\5\5\n\g\9\p\5\s\k\u\q\b\r\o\q\3\9\a\b\n\a\j\3\3\c\w\n\i\1\p\s\b\x\3\9\o\l\d\4\0\d\5\6\u\a\y\n\r\r\d\c\l\f\n\n\o\4\3\7\t\8\c\9\j\w\r\z\m\i\7\o\t\v\3\c\9\4\g\o\v\t\i\h\l\8\m\o\m\0\c\8\8\z\9\d\z\z\j\k\y\a\w\e\4\o\r\6\i\w\u\3\4\a\d\f\z\q\k\m\6\x\w\l\2\3\7\o\6\3\5\f\c\x\1\7\2\j\7\8\x\9\p\u\s\y\n\c\l\t\i\d\t\0\9\q\8\h\1\k\t\1\w\x\3\c\5\x\2\e\4\n\s\6\l\r\u\j\s\i\n\r\6\6\5\p\c\s\u\b\0\9\7\f\p\m\5\i\t\2\g\q\n\a\8\t\3\5\l\v\0\y\v\l\8\y\e\a\l\q\a\z\p\p\k\3\n\z\3\0\0\h\7\d\d\o\d\y\t\c\t\c\o\9\1\v\p\q\r\u\s\1\r\1\a\y\3\1\6\6\9\t\t\2\s\7\c\p\1\n\9\6\f\p\a\y\q\9\s\9\m\r\0\9\t\y\y\7\v\v\o\j\4\9\5\p\5\6\g\9\r\p\p\2\r\7\9\y\w\o\u\9\r\s\4\t\g\q\q\i\x\n\3\g\z\v\g\8\9\h\z\j\k\5\7\f\8\5\p\b\d\y\s\1\g\x\9\g\g\a\p\r\c\d\7\l\c\5\x\q\x\e\t\y\u\j\q\p\v\9 ]] 00:08:20.860 00:08:20.860 real 0m3.344s 00:08:20.860 user 0m1.581s 00:08:20.860 sys 0m0.790s 00:08:20.860 18:17:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.860 ************************************ 00:08:20.860 END TEST dd_flags_misc_forced_aio 00:08:20.860 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:20.860 ************************************ 00:08:21.131 18:17:19 -- dd/posix.sh@1 -- # cleanup 00:08:21.131 18:17:19 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:21.131 18:17:19 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:21.131 00:08:21.131 real 0m16.368s 00:08:21.131 user 0m6.932s 00:08:21.131 sys 0m3.597s 00:08:21.131 18:17:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.131 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:21.131 ************************************ 00:08:21.131 END TEST spdk_dd_posix 00:08:21.131 ************************************ 00:08:21.131 18:17:19 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:21.131 18:17:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:21.131 18:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:08:21.131 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:21.131 ************************************ 00:08:21.131 START TEST spdk_dd_malloc 00:08:21.131 ************************************ 00:08:21.131 18:17:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:21.131 * Looking for test storage... 00:08:21.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:21.131 18:17:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:21.131 18:17:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:21.131 18:17:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:21.131 18:17:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:21.131 18:17:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:21.131 18:17:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:21.131 18:17:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:21.131 18:17:19 -- scripts/common.sh@335 -- # IFS=.-: 00:08:21.131 18:17:19 -- scripts/common.sh@335 -- # read -ra ver1 00:08:21.131 18:17:19 -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.131 18:17:19 -- scripts/common.sh@336 -- # read -ra ver2 00:08:21.131 18:17:19 -- scripts/common.sh@337 -- # local 'op=<' 00:08:21.131 18:17:19 -- scripts/common.sh@339 -- # ver1_l=2 00:08:21.131 18:17:19 -- scripts/common.sh@340 -- # ver2_l=1 00:08:21.131 18:17:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:21.131 18:17:19 -- scripts/common.sh@343 -- # case "$op" in 00:08:21.131 18:17:19 -- scripts/common.sh@344 -- # : 1 00:08:21.131 18:17:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:21.131 18:17:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.131 18:17:19 -- scripts/common.sh@364 -- # decimal 1 00:08:21.131 18:17:19 -- scripts/common.sh@352 -- # local d=1 00:08:21.131 18:17:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.131 18:17:19 -- scripts/common.sh@354 -- # echo 1 00:08:21.131 18:17:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:21.131 18:17:19 -- scripts/common.sh@365 -- # decimal 2 00:08:21.131 18:17:19 -- scripts/common.sh@352 -- # local d=2 00:08:21.131 18:17:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.131 18:17:19 -- scripts/common.sh@354 -- # echo 2 00:08:21.131 18:17:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:21.131 18:17:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:21.131 18:17:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:21.131 18:17:19 -- scripts/common.sh@367 -- # return 0 00:08:21.131 18:17:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.131 18:17:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:21.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.131 --rc genhtml_branch_coverage=1 00:08:21.131 --rc genhtml_function_coverage=1 00:08:21.131 --rc genhtml_legend=1 00:08:21.131 --rc geninfo_all_blocks=1 00:08:21.131 --rc geninfo_unexecuted_blocks=1 00:08:21.131 00:08:21.131 ' 00:08:21.131 18:17:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:21.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.131 --rc genhtml_branch_coverage=1 00:08:21.131 --rc genhtml_function_coverage=1 00:08:21.131 --rc genhtml_legend=1 00:08:21.131 --rc geninfo_all_blocks=1 00:08:21.131 --rc geninfo_unexecuted_blocks=1 00:08:21.132 00:08:21.132 ' 00:08:21.132 18:17:19 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:08:21.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.132 --rc genhtml_branch_coverage=1 00:08:21.132 --rc genhtml_function_coverage=1 00:08:21.132 --rc genhtml_legend=1 00:08:21.132 --rc geninfo_all_blocks=1 00:08:21.132 --rc geninfo_unexecuted_blocks=1 00:08:21.132 00:08:21.132 ' 00:08:21.132 18:17:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:21.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.132 --rc genhtml_branch_coverage=1 00:08:21.132 --rc genhtml_function_coverage=1 00:08:21.132 --rc genhtml_legend=1 00:08:21.132 --rc geninfo_all_blocks=1 00:08:21.132 --rc geninfo_unexecuted_blocks=1 00:08:21.132 00:08:21.132 ' 00:08:21.132 18:17:19 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.132 18:17:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.132 18:17:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.132 18:17:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.132 18:17:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.132 18:17:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.132 18:17:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.132 18:17:19 -- paths/export.sh@5 -- # export PATH 00:08:21.132 18:17:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.132 18:17:19 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:21.132 18:17:19 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:21.132 18:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.132 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:21.132 ************************************ 00:08:21.132 START TEST dd_malloc_copy 00:08:21.132 ************************************ 00:08:21.132 18:17:19 -- common/autotest_common.sh@1114 -- # malloc_copy 00:08:21.132 18:17:19 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:21.132 18:17:19 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:21.132 18:17:19 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:21.132 18:17:19 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:21.132 18:17:19 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:21.132 18:17:19 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:21.132 18:17:19 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:21.132 18:17:19 -- dd/malloc.sh@28 -- # gen_conf 00:08:21.132 18:17:19 -- dd/common.sh@31 -- # xtrace_disable 00:08:21.132 18:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:21.397 [2024-11-17 18:17:19.432338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:21.397 [2024-11-17 18:17:19.432437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70318 ] 00:08:21.397 { 00:08:21.397 "subsystems": [ 00:08:21.397 { 00:08:21.397 "subsystem": "bdev", 00:08:21.397 "config": [ 00:08:21.397 { 00:08:21.397 "params": { 00:08:21.397 "block_size": 512, 00:08:21.397 "num_blocks": 1048576, 00:08:21.397 "name": "malloc0" 00:08:21.397 }, 00:08:21.397 "method": "bdev_malloc_create" 00:08:21.397 }, 00:08:21.397 { 00:08:21.397 "params": { 00:08:21.397 "block_size": 512, 00:08:21.397 "num_blocks": 1048576, 00:08:21.397 "name": "malloc1" 00:08:21.397 }, 00:08:21.397 "method": "bdev_malloc_create" 00:08:21.397 }, 00:08:21.397 { 00:08:21.397 "method": "bdev_wait_for_examine" 00:08:21.397 } 00:08:21.397 ] 00:08:21.397 } 00:08:21.397 ] 00:08:21.397 } 00:08:21.397 [2024-11-17 18:17:19.570362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.397 [2024-11-17 18:17:19.611081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.775  [2024-11-17T18:17:21.979Z] Copying: 232/512 [MB] (232 MBps) [2024-11-17T18:17:22.238Z] Copying: 474/512 [MB] (242 MBps) [2024-11-17T18:17:22.498Z] Copying: 512/512 [MB] (average 236 MBps) 00:08:24.231 00:08:24.231 18:17:22 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:24.231 18:17:22 -- dd/malloc.sh@33 -- # gen_conf 00:08:24.231 18:17:22 -- dd/common.sh@31 -- # xtrace_disable 00:08:24.231 18:17:22 -- common/autotest_common.sh@10 -- # set +x 00:08:24.231 [2024-11-17 18:17:22.397583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:24.231 [2024-11-17 18:17:22.397691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70360 ] 00:08:24.231 { 00:08:24.231 "subsystems": [ 00:08:24.231 { 00:08:24.231 "subsystem": "bdev", 00:08:24.231 "config": [ 00:08:24.231 { 00:08:24.231 "params": { 00:08:24.231 "block_size": 512, 00:08:24.231 "num_blocks": 1048576, 00:08:24.231 "name": "malloc0" 00:08:24.231 }, 00:08:24.231 "method": "bdev_malloc_create" 00:08:24.231 }, 00:08:24.231 { 00:08:24.231 "params": { 00:08:24.231 "block_size": 512, 00:08:24.231 "num_blocks": 1048576, 00:08:24.231 "name": "malloc1" 00:08:24.231 }, 00:08:24.231 "method": "bdev_malloc_create" 00:08:24.231 }, 00:08:24.231 { 00:08:24.231 "method": "bdev_wait_for_examine" 00:08:24.231 } 00:08:24.231 ] 00:08:24.231 } 00:08:24.231 ] 00:08:24.231 } 00:08:24.493 [2024-11-17 18:17:22.534725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.493 [2024-11-17 18:17:22.564648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.874  [2024-11-17T18:17:25.079Z] Copying: 243/512 [MB] (243 MBps) [2024-11-17T18:17:25.079Z] Copying: 492/512 [MB] (249 MBps) [2024-11-17T18:17:25.338Z] Copying: 512/512 [MB] (average 245 MBps) 00:08:27.071 00:08:27.071 00:08:27.071 real 0m5.793s 00:08:27.071 user 0m5.111s 00:08:27.071 sys 0m0.532s 00:08:27.071 18:17:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:27.071 ************************************ 00:08:27.071 END TEST dd_malloc_copy 00:08:27.071 ************************************ 00:08:27.071 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:08:27.071 ************************************ 00:08:27.071 END TEST spdk_dd_malloc 00:08:27.071 ************************************ 00:08:27.071 00:08:27.071 real 0m6.026s 00:08:27.071 user 0m5.240s 00:08:27.071 sys 0m0.643s 00:08:27.071 18:17:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:27.071 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:08:27.071 18:17:25 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:27.071 18:17:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:27.071 18:17:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.071 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:08:27.071 ************************************ 00:08:27.071 START TEST spdk_dd_bdev_to_bdev 00:08:27.071 ************************************ 00:08:27.071 18:17:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:27.331 * Looking for test storage... 
00:08:27.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:27.331 18:17:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:27.331 18:17:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:27.331 18:17:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:27.331 18:17:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:27.331 18:17:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:27.331 18:17:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:27.331 18:17:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:27.331 18:17:25 -- scripts/common.sh@335 -- # IFS=.-: 00:08:27.331 18:17:25 -- scripts/common.sh@335 -- # read -ra ver1 00:08:27.331 18:17:25 -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.331 18:17:25 -- scripts/common.sh@336 -- # read -ra ver2 00:08:27.331 18:17:25 -- scripts/common.sh@337 -- # local 'op=<' 00:08:27.331 18:17:25 -- scripts/common.sh@339 -- # ver1_l=2 00:08:27.331 18:17:25 -- scripts/common.sh@340 -- # ver2_l=1 00:08:27.331 18:17:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:27.331 18:17:25 -- scripts/common.sh@343 -- # case "$op" in 00:08:27.331 18:17:25 -- scripts/common.sh@344 -- # : 1 00:08:27.331 18:17:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:27.331 18:17:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.331 18:17:25 -- scripts/common.sh@364 -- # decimal 1 00:08:27.331 18:17:25 -- scripts/common.sh@352 -- # local d=1 00:08:27.331 18:17:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.331 18:17:25 -- scripts/common.sh@354 -- # echo 1 00:08:27.331 18:17:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:27.331 18:17:25 -- scripts/common.sh@365 -- # decimal 2 00:08:27.331 18:17:25 -- scripts/common.sh@352 -- # local d=2 00:08:27.331 18:17:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.331 18:17:25 -- scripts/common.sh@354 -- # echo 2 00:08:27.331 18:17:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:27.331 18:17:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:27.331 18:17:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:27.331 18:17:25 -- scripts/common.sh@367 -- # return 0 00:08:27.331 18:17:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.331 18:17:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:27.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.331 --rc genhtml_branch_coverage=1 00:08:27.331 --rc genhtml_function_coverage=1 00:08:27.331 --rc genhtml_legend=1 00:08:27.331 --rc geninfo_all_blocks=1 00:08:27.331 --rc geninfo_unexecuted_blocks=1 00:08:27.331 00:08:27.331 ' 00:08:27.331 18:17:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:27.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.331 --rc genhtml_branch_coverage=1 00:08:27.331 --rc genhtml_function_coverage=1 00:08:27.331 --rc genhtml_legend=1 00:08:27.331 --rc geninfo_all_blocks=1 00:08:27.331 --rc geninfo_unexecuted_blocks=1 00:08:27.331 00:08:27.331 ' 00:08:27.331 18:17:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:27.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.331 --rc genhtml_branch_coverage=1 00:08:27.331 --rc genhtml_function_coverage=1 00:08:27.331 --rc genhtml_legend=1 00:08:27.331 --rc geninfo_all_blocks=1 00:08:27.331 --rc geninfo_unexecuted_blocks=1 00:08:27.331 00:08:27.331 ' 00:08:27.331 18:17:25 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:27.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.331 --rc genhtml_branch_coverage=1 00:08:27.331 --rc genhtml_function_coverage=1 00:08:27.331 --rc genhtml_legend=1 00:08:27.331 --rc geninfo_all_blocks=1 00:08:27.331 --rc geninfo_unexecuted_blocks=1 00:08:27.331 00:08:27.331 ' 00:08:27.331 18:17:25 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.331 18:17:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.331 18:17:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.331 18:17:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.331 18:17:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.331 18:17:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.331 18:17:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.331 18:17:25 -- paths/export.sh@5 -- # export PATH 00:08:27.331 18:17:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:27.331 18:17:25 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:27.331 18:17:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:27.331 18:17:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.331 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:08:27.331 ************************************ 00:08:27.331 START TEST dd_inflate_file 00:08:27.331 ************************************ 00:08:27.331 18:17:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:27.331 [2024-11-17 18:17:25.512398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:27.331 [2024-11-17 18:17:25.512492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70466 ] 00:08:27.591 [2024-11-17 18:17:25.650717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.591 [2024-11-17 18:17:25.689083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.591  [2024-11-17T18:17:26.117Z] Copying: 64/64 [MB] (average 1684 MBps) 00:08:27.850 00:08:27.850 00:08:27.850 real 0m0.480s 00:08:27.850 user 0m0.227s 00:08:27.850 sys 0m0.133s 00:08:27.850 18:17:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:27.850 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:08:27.850 ************************************ 00:08:27.850 END TEST dd_inflate_file 00:08:27.850 ************************************ 00:08:27.850 18:17:25 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:27.850 18:17:25 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:27.850 18:17:25 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:27.850 18:17:25 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:27.850 18:17:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:27.850 18:17:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.850 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:08:27.850 18:17:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:27.850 18:17:25 -- common/autotest_common.sh@10 -- # set +x 00:08:27.850 ************************************ 00:08:27.850 START TEST dd_copy_to_out_bdev 00:08:27.850 ************************************ 00:08:27.850 18:17:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:27.850 [2024-11-17 18:17:26.048386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:27.850 [2024-11-17 18:17:26.048497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70497 ] 00:08:27.850 { 00:08:27.850 "subsystems": [ 00:08:27.850 { 00:08:27.850 "subsystem": "bdev", 00:08:27.850 "config": [ 00:08:27.850 { 00:08:27.851 "params": { 00:08:27.851 "trtype": "pcie", 00:08:27.851 "traddr": "0000:00:06.0", 00:08:27.851 "name": "Nvme0" 00:08:27.851 }, 00:08:27.851 "method": "bdev_nvme_attach_controller" 00:08:27.851 }, 00:08:27.851 { 00:08:27.851 "params": { 00:08:27.851 "trtype": "pcie", 00:08:27.851 "traddr": "0000:00:07.0", 00:08:27.851 "name": "Nvme1" 00:08:27.851 }, 00:08:27.851 "method": "bdev_nvme_attach_controller" 00:08:27.851 }, 00:08:27.851 { 00:08:27.851 "method": "bdev_wait_for_examine" 00:08:27.851 } 00:08:27.851 ] 00:08:27.851 } 00:08:27.851 ] 00:08:27.851 } 00:08:28.110 [2024-11-17 18:17:26.188252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.110 [2024-11-17 18:17:26.226483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.489  [2024-11-17T18:17:28.015Z] Copying: 46/64 [MB] (46 MBps) [2024-11-17T18:17:28.015Z] Copying: 64/64 [MB] (average 46 MBps) 00:08:29.748 00:08:29.748 00:08:29.748 real 0m1.973s 00:08:29.748 user 0m1.728s 00:08:29.748 sys 0m0.172s 00:08:29.748 18:17:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:29.748 ************************************ 00:08:29.748 END TEST dd_copy_to_out_bdev 00:08:29.748 ************************************ 00:08:29.748 18:17:27 -- common/autotest_common.sh@10 -- # set +x 00:08:30.008 18:17:28 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:30.008 18:17:28 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:30.008 18:17:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:30.008 18:17:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.008 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:08:30.008 ************************************ 00:08:30.008 START TEST dd_offset_magic 00:08:30.008 ************************************ 00:08:30.008 18:17:28 -- common/autotest_common.sh@1114 -- # offset_magic 00:08:30.008 18:17:28 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:30.008 18:17:28 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:30.008 18:17:28 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:30.008 18:17:28 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:30.008 18:17:28 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:30.008 18:17:28 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:30.008 18:17:28 -- dd/common.sh@31 -- # xtrace_disable 00:08:30.008 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:08:30.008 [2024-11-17 18:17:28.079586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:30.008 [2024-11-17 18:17:28.079703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70544 ] 00:08:30.008 { 00:08:30.008 "subsystems": [ 00:08:30.008 { 00:08:30.008 "subsystem": "bdev", 00:08:30.008 "config": [ 00:08:30.008 { 00:08:30.008 "params": { 00:08:30.008 "trtype": "pcie", 00:08:30.008 "traddr": "0000:00:06.0", 00:08:30.008 "name": "Nvme0" 00:08:30.008 }, 00:08:30.008 "method": "bdev_nvme_attach_controller" 00:08:30.008 }, 00:08:30.008 { 00:08:30.008 "params": { 00:08:30.008 "trtype": "pcie", 00:08:30.008 "traddr": "0000:00:07.0", 00:08:30.008 "name": "Nvme1" 00:08:30.008 }, 00:08:30.008 "method": "bdev_nvme_attach_controller" 00:08:30.008 }, 00:08:30.008 { 00:08:30.008 "method": "bdev_wait_for_examine" 00:08:30.008 } 00:08:30.008 ] 00:08:30.008 } 00:08:30.008 ] 00:08:30.008 } 00:08:30.008 [2024-11-17 18:17:28.218333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.008 [2024-11-17 18:17:28.259000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.269  [2024-11-17T18:17:28.795Z] Copying: 65/65 [MB] (average 822 MBps) 00:08:30.528 00:08:30.529 18:17:28 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:30.529 18:17:28 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:30.529 18:17:28 -- dd/common.sh@31 -- # xtrace_disable 00:08:30.529 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:08:30.529 [2024-11-17 18:17:28.754346] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:30.529 [2024-11-17 18:17:28.754452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70559 ] 00:08:30.529 { 00:08:30.529 "subsystems": [ 00:08:30.529 { 00:08:30.529 "subsystem": "bdev", 00:08:30.529 "config": [ 00:08:30.529 { 00:08:30.529 "params": { 00:08:30.529 "trtype": "pcie", 00:08:30.529 "traddr": "0000:00:06.0", 00:08:30.529 "name": "Nvme0" 00:08:30.529 }, 00:08:30.529 "method": "bdev_nvme_attach_controller" 00:08:30.529 }, 00:08:30.529 { 00:08:30.529 "params": { 00:08:30.529 "trtype": "pcie", 00:08:30.529 "traddr": "0000:00:07.0", 00:08:30.529 "name": "Nvme1" 00:08:30.529 }, 00:08:30.529 "method": "bdev_nvme_attach_controller" 00:08:30.529 }, 00:08:30.529 { 00:08:30.529 "method": "bdev_wait_for_examine" 00:08:30.529 } 00:08:30.529 ] 00:08:30.529 } 00:08:30.529 ] 00:08:30.529 } 00:08:30.788 [2024-11-17 18:17:28.891085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.788 [2024-11-17 18:17:28.923103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.047  [2024-11-17T18:17:29.314Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:31.047 00:08:31.047 18:17:29 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:31.047 18:17:29 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:31.047 18:17:29 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:31.047 18:17:29 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:31.047 18:17:29 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:31.047 18:17:29 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.047 18:17:29 -- common/autotest_common.sh@10 -- # set +x 00:08:31.047 [2024-11-17 18:17:29.301259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
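Note: in the dd_offset_magic steps above, 65 blocks of 1 MiB are copied from Nvme0n1 into Nvme1n1 at block offset 16 (--seek=16 with --bs=1048576), and a single block is then read back from the same offset (--skip=16) into dd.dump1 so the magic string can be checked. A condensed sketch of that round trip, reusing the SPDK_DD variable from the sketch above and assuming CONF points at the same two-NVMe JSON config:

# Sketch: write 65 x 1 MiB at block offset 16 on the output bdev,
# then read 1 block back from that offset into a file and verify the magic.
"$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json "$CONF"
"$SPDK_DD" --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
           --count=1 --skip=16 --bs=1048576 --json "$CONF"
read -rn26 magic_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
[[ $magic_check == "This Is Our Magic, find it" ]] && echo "offset magic OK"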
00:08:31.047 [2024-11-17 18:17:29.301382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70579 ] 00:08:31.307 { 00:08:31.307 "subsystems": [ 00:08:31.307 { 00:08:31.307 "subsystem": "bdev", 00:08:31.307 "config": [ 00:08:31.307 { 00:08:31.307 "params": { 00:08:31.307 "trtype": "pcie", 00:08:31.307 "traddr": "0000:00:06.0", 00:08:31.307 "name": "Nvme0" 00:08:31.307 }, 00:08:31.307 "method": "bdev_nvme_attach_controller" 00:08:31.307 }, 00:08:31.307 { 00:08:31.307 "params": { 00:08:31.307 "trtype": "pcie", 00:08:31.307 "traddr": "0000:00:07.0", 00:08:31.307 "name": "Nvme1" 00:08:31.307 }, 00:08:31.307 "method": "bdev_nvme_attach_controller" 00:08:31.307 }, 00:08:31.307 { 00:08:31.307 "method": "bdev_wait_for_examine" 00:08:31.307 } 00:08:31.307 ] 00:08:31.307 } 00:08:31.307 ] 00:08:31.307 } 00:08:31.307 [2024-11-17 18:17:29.437961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.307 [2024-11-17 18:17:29.468582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.566  [2024-11-17T18:17:30.092Z] Copying: 65/65 [MB] (average 942 MBps) 00:08:31.825 00:08:31.825 18:17:29 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:31.825 18:17:29 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:31.825 18:17:29 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.825 18:17:29 -- common/autotest_common.sh@10 -- # set +x 00:08:31.825 [2024-11-17 18:17:29.938734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:31.825 [2024-11-17 18:17:29.938831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70588 ] 00:08:31.825 { 00:08:31.825 "subsystems": [ 00:08:31.825 { 00:08:31.825 "subsystem": "bdev", 00:08:31.825 "config": [ 00:08:31.825 { 00:08:31.825 "params": { 00:08:31.825 "trtype": "pcie", 00:08:31.825 "traddr": "0000:00:06.0", 00:08:31.825 "name": "Nvme0" 00:08:31.825 }, 00:08:31.825 "method": "bdev_nvme_attach_controller" 00:08:31.825 }, 00:08:31.825 { 00:08:31.825 "params": { 00:08:31.825 "trtype": "pcie", 00:08:31.825 "traddr": "0000:00:07.0", 00:08:31.825 "name": "Nvme1" 00:08:31.825 }, 00:08:31.825 "method": "bdev_nvme_attach_controller" 00:08:31.825 }, 00:08:31.825 { 00:08:31.825 "method": "bdev_wait_for_examine" 00:08:31.825 } 00:08:31.825 ] 00:08:31.825 } 00:08:31.825 ] 00:08:31.825 } 00:08:31.825 [2024-11-17 18:17:30.074725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.084 [2024-11-17 18:17:30.111601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.084  [2024-11-17T18:17:30.610Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:32.343 00:08:32.343 18:17:30 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:32.343 18:17:30 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:32.343 00:08:32.343 real 0m2.416s 00:08:32.343 user 0m1.756s 00:08:32.343 sys 0m0.473s 00:08:32.343 18:17:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.343 ************************************ 00:08:32.343 END TEST dd_offset_magic 00:08:32.343 ************************************ 00:08:32.343 18:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.343 18:17:30 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:32.343 18:17:30 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:32.343 18:17:30 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:32.343 18:17:30 -- dd/common.sh@11 -- # local nvme_ref= 00:08:32.343 18:17:30 -- dd/common.sh@12 -- # local size=4194330 00:08:32.343 18:17:30 -- dd/common.sh@14 -- # local bs=1048576 00:08:32.343 18:17:30 -- dd/common.sh@15 -- # local count=5 00:08:32.343 18:17:30 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:32.343 18:17:30 -- dd/common.sh@18 -- # gen_conf 00:08:32.343 18:17:30 -- dd/common.sh@31 -- # xtrace_disable 00:08:32.343 18:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.343 [2024-11-17 18:17:30.540740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:32.343 [2024-11-17 18:17:30.540847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70623 ] 00:08:32.343 { 00:08:32.343 "subsystems": [ 00:08:32.343 { 00:08:32.343 "subsystem": "bdev", 00:08:32.343 "config": [ 00:08:32.343 { 00:08:32.343 "params": { 00:08:32.343 "trtype": "pcie", 00:08:32.343 "traddr": "0000:00:06.0", 00:08:32.343 "name": "Nvme0" 00:08:32.343 }, 00:08:32.343 "method": "bdev_nvme_attach_controller" 00:08:32.343 }, 00:08:32.343 { 00:08:32.343 "params": { 00:08:32.343 "trtype": "pcie", 00:08:32.343 "traddr": "0000:00:07.0", 00:08:32.343 "name": "Nvme1" 00:08:32.343 }, 00:08:32.343 "method": "bdev_nvme_attach_controller" 00:08:32.343 }, 00:08:32.343 { 00:08:32.343 "method": "bdev_wait_for_examine" 00:08:32.343 } 00:08:32.343 ] 00:08:32.343 } 00:08:32.343 ] 00:08:32.343 } 00:08:32.602 [2024-11-17 18:17:30.679375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.602 [2024-11-17 18:17:30.710186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.861  [2024-11-17T18:17:31.128Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:32.861 00:08:32.861 18:17:31 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:32.861 18:17:31 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:32.861 18:17:31 -- dd/common.sh@11 -- # local nvme_ref= 00:08:32.861 18:17:31 -- dd/common.sh@12 -- # local size=4194330 00:08:32.861 18:17:31 -- dd/common.sh@14 -- # local bs=1048576 00:08:32.861 18:17:31 -- dd/common.sh@15 -- # local count=5 00:08:32.861 18:17:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:32.861 18:17:31 -- dd/common.sh@18 -- # gen_conf 00:08:32.861 18:17:31 -- dd/common.sh@31 -- # xtrace_disable 00:08:32.861 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:08:32.861 [2024-11-17 18:17:31.082296] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:32.861 [2024-11-17 18:17:31.082388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70632 ] 00:08:32.861 { 00:08:32.861 "subsystems": [ 00:08:32.861 { 00:08:32.861 "subsystem": "bdev", 00:08:32.861 "config": [ 00:08:32.861 { 00:08:32.861 "params": { 00:08:32.861 "trtype": "pcie", 00:08:32.861 "traddr": "0000:00:06.0", 00:08:32.861 "name": "Nvme0" 00:08:32.861 }, 00:08:32.861 "method": "bdev_nvme_attach_controller" 00:08:32.861 }, 00:08:32.861 { 00:08:32.861 "params": { 00:08:32.861 "trtype": "pcie", 00:08:32.861 "traddr": "0000:00:07.0", 00:08:32.861 "name": "Nvme1" 00:08:32.861 }, 00:08:32.861 "method": "bdev_nvme_attach_controller" 00:08:32.861 }, 00:08:32.861 { 00:08:32.861 "method": "bdev_wait_for_examine" 00:08:32.861 } 00:08:32.862 ] 00:08:32.862 } 00:08:32.862 ] 00:08:32.862 } 00:08:33.120 [2024-11-17 18:17:31.218025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.120 [2024-11-17 18:17:31.251641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.379  [2024-11-17T18:17:31.646Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:33.379 00:08:33.379 18:17:31 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:33.379 00:08:33.379 real 0m6.331s 00:08:33.379 user 0m4.654s 00:08:33.379 sys 0m1.190s 00:08:33.379 18:17:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.379 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:08:33.379 ************************************ 00:08:33.379 END TEST spdk_dd_bdev_to_bdev 00:08:33.379 ************************************ 00:08:33.379 18:17:31 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:33.379 18:17:31 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:33.379 18:17:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.379 18:17:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.379 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:08:33.638 ************************************ 00:08:33.638 START TEST spdk_dd_uring 00:08:33.638 ************************************ 00:08:33.638 18:17:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:33.638 * Looking for test storage... 
00:08:33.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:33.638 18:17:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:33.639 18:17:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:33.639 18:17:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:33.639 18:17:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:33.639 18:17:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:33.639 18:17:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:33.639 18:17:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:33.639 18:17:31 -- scripts/common.sh@335 -- # IFS=.-: 00:08:33.639 18:17:31 -- scripts/common.sh@335 -- # read -ra ver1 00:08:33.639 18:17:31 -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.639 18:17:31 -- scripts/common.sh@336 -- # read -ra ver2 00:08:33.639 18:17:31 -- scripts/common.sh@337 -- # local 'op=<' 00:08:33.639 18:17:31 -- scripts/common.sh@339 -- # ver1_l=2 00:08:33.639 18:17:31 -- scripts/common.sh@340 -- # ver2_l=1 00:08:33.639 18:17:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:33.639 18:17:31 -- scripts/common.sh@343 -- # case "$op" in 00:08:33.639 18:17:31 -- scripts/common.sh@344 -- # : 1 00:08:33.639 18:17:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:33.639 18:17:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.639 18:17:31 -- scripts/common.sh@364 -- # decimal 1 00:08:33.639 18:17:31 -- scripts/common.sh@352 -- # local d=1 00:08:33.639 18:17:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.639 18:17:31 -- scripts/common.sh@354 -- # echo 1 00:08:33.639 18:17:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:33.639 18:17:31 -- scripts/common.sh@365 -- # decimal 2 00:08:33.639 18:17:31 -- scripts/common.sh@352 -- # local d=2 00:08:33.639 18:17:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.639 18:17:31 -- scripts/common.sh@354 -- # echo 2 00:08:33.639 18:17:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:33.639 18:17:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:33.639 18:17:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:33.639 18:17:31 -- scripts/common.sh@367 -- # return 0 00:08:33.639 18:17:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.639 18:17:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:33.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.639 --rc genhtml_branch_coverage=1 00:08:33.639 --rc genhtml_function_coverage=1 00:08:33.639 --rc genhtml_legend=1 00:08:33.639 --rc geninfo_all_blocks=1 00:08:33.639 --rc geninfo_unexecuted_blocks=1 00:08:33.639 00:08:33.639 ' 00:08:33.639 18:17:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:33.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.639 --rc genhtml_branch_coverage=1 00:08:33.639 --rc genhtml_function_coverage=1 00:08:33.639 --rc genhtml_legend=1 00:08:33.639 --rc geninfo_all_blocks=1 00:08:33.639 --rc geninfo_unexecuted_blocks=1 00:08:33.639 00:08:33.639 ' 00:08:33.639 18:17:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:33.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.639 --rc genhtml_branch_coverage=1 00:08:33.639 --rc genhtml_function_coverage=1 00:08:33.639 --rc genhtml_legend=1 00:08:33.639 --rc geninfo_all_blocks=1 00:08:33.639 --rc geninfo_unexecuted_blocks=1 00:08:33.639 00:08:33.639 ' 00:08:33.639 18:17:31 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:33.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.639 --rc genhtml_branch_coverage=1 00:08:33.639 --rc genhtml_function_coverage=1 00:08:33.639 --rc genhtml_legend=1 00:08:33.639 --rc geninfo_all_blocks=1 00:08:33.639 --rc geninfo_unexecuted_blocks=1 00:08:33.639 00:08:33.639 ' 00:08:33.639 18:17:31 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.639 18:17:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.639 18:17:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.639 18:17:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.639 18:17:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.639 18:17:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.639 18:17:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.639 18:17:31 -- paths/export.sh@5 -- # export PATH 00:08:33.639 18:17:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.639 18:17:31 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:33.639 18:17:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.639 18:17:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.639 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:08:33.639 ************************************ 00:08:33.639 START TEST dd_uring_copy 00:08:33.639 ************************************ 00:08:33.639 18:17:31 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:08:33.639 18:17:31 -- dd/uring.sh@15 -- # local zram_dev_id 00:08:33.639 18:17:31 -- dd/uring.sh@16 -- # local magic 00:08:33.639 18:17:31 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:33.639 18:17:31 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:33.639 18:17:31 -- dd/uring.sh@19 -- # local verify_magic 00:08:33.639 18:17:31 -- dd/uring.sh@21 -- # init_zram 00:08:33.639 18:17:31 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:33.639 18:17:31 -- dd/common.sh@164 -- # return 00:08:33.639 18:17:31 -- dd/uring.sh@22 -- # create_zram_dev 00:08:33.639 18:17:31 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:33.639 18:17:31 -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:33.639 18:17:31 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:33.639 18:17:31 -- dd/common.sh@181 -- # local id=1 00:08:33.639 18:17:31 -- dd/common.sh@182 -- # local size=512M 00:08:33.639 18:17:31 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:33.639 18:17:31 -- dd/common.sh@186 -- # echo 512M 00:08:33.639 18:17:31 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:33.639 18:17:31 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:33.639 18:17:31 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:33.639 18:17:31 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:33.639 18:17:31 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:33.639 18:17:31 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:33.639 18:17:31 -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:33.639 18:17:31 -- dd/common.sh@98 -- # xtrace_disable 00:08:33.639 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:08:33.639 18:17:31 -- dd/uring.sh@41 -- # magic=lk2k87slfvb6ax9gwx81xatgrpalxhb9l7skj9w2p3mws25buypb6rgo6fd56ka3l4jjts1qs2stw205qnptky21og8kamsgxlo202mhxide3sjnwv4ic2fsx64h249ap6uhiq7202gtghn3vf4v7iyk4d1vna6ol0a4bl1ai7fzhs3hj6c7vpwlt5zwobkv846cf3fdgtnb5ytggfhf8w5m3b7a4omnx0rvxt6eprsbzfij8o7om1a4qck0cd8q890u1qenfmeksvjafaui67sk8w4xqhlreq1sa6oqbupyquqld8j8dnprwpobi2l1y2qrpcaspazmm1fhya3ojc3mvdfgqyoh6mcmzsg531mvusq24mm6we7uar3k42av3vvv8ujgndj3ai69npl61io4gmhzqq7zwz8pp9me3tsdujjc5i23vjax3zqkhl0zd0wi9ldtkdv33zvo09l99th9lpcdepekqki0p17au747pgkfejnrzwm5n3q57f4uhmibvjqlfbvx0z3z6c3kw4xn8wfo5f981n2a3s0z6ewno7xfm43dame9zu3ulagg2ezifwkcrvvq5dsg8ghhhg22upmskdp513s9ggaogj7v8f6267m3f2daqbnj7qc3i7cus6q9gwif9h56gj12vjgalmtnpyg4abziiev60o0h1g4m3244h1icrvarn052iirdwoi61yvi1e71hvnew7ih210zds4hsl2y9fmtj3vbeu7qf6ig2zch92m8zhglzrcjq00dh9g7ll9ijtmskajiotuupesesu3z96u8njobxm2crkrd9vs8u0rlmzso7riaal78osded3wxyjiwd4o5mngzd84bsyn0azzxgmrzf2i2eqic1xg1tmsjjiphds40oa6ve038m68jwwnab8r5nznmm66h2c26z6ijod1rxvpaitmgwxv3pbnntv3r7hvh1mv4yya1jm3fuk2zbgsaytz9elcyrozlzsxrn5xnt1aju1o5o21fpcumbdho 00:08:33.639 18:17:31 -- dd/uring.sh@42 -- # echo 
lk2k87slfvb6ax9gwx81xatgrpalxhb9l7skj9w2p3mws25buypb6rgo6fd56ka3l4jjts1qs2stw205qnptky21og8kamsgxlo202mhxide3sjnwv4ic2fsx64h249ap6uhiq7202gtghn3vf4v7iyk4d1vna6ol0a4bl1ai7fzhs3hj6c7vpwlt5zwobkv846cf3fdgtnb5ytggfhf8w5m3b7a4omnx0rvxt6eprsbzfij8o7om1a4qck0cd8q890u1qenfmeksvjafaui67sk8w4xqhlreq1sa6oqbupyquqld8j8dnprwpobi2l1y2qrpcaspazmm1fhya3ojc3mvdfgqyoh6mcmzsg531mvusq24mm6we7uar3k42av3vvv8ujgndj3ai69npl61io4gmhzqq7zwz8pp9me3tsdujjc5i23vjax3zqkhl0zd0wi9ldtkdv33zvo09l99th9lpcdepekqki0p17au747pgkfejnrzwm5n3q57f4uhmibvjqlfbvx0z3z6c3kw4xn8wfo5f981n2a3s0z6ewno7xfm43dame9zu3ulagg2ezifwkcrvvq5dsg8ghhhg22upmskdp513s9ggaogj7v8f6267m3f2daqbnj7qc3i7cus6q9gwif9h56gj12vjgalmtnpyg4abziiev60o0h1g4m3244h1icrvarn052iirdwoi61yvi1e71hvnew7ih210zds4hsl2y9fmtj3vbeu7qf6ig2zch92m8zhglzrcjq00dh9g7ll9ijtmskajiotuupesesu3z96u8njobxm2crkrd9vs8u0rlmzso7riaal78osded3wxyjiwd4o5mngzd84bsyn0azzxgmrzf2i2eqic1xg1tmsjjiphds40oa6ve038m68jwwnab8r5nznmm66h2c26z6ijod1rxvpaitmgwxv3pbnntv3r7hvh1mv4yya1jm3fuk2zbgsaytz9elcyrozlzsxrn5xnt1aju1o5o21fpcumbdho 00:08:33.639 18:17:31 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:33.898 [2024-11-17 18:17:31.921570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:33.898 [2024-11-17 18:17:31.921655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70708 ] 00:08:33.898 [2024-11-17 18:17:32.060976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.898 [2024-11-17 18:17:32.100060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.467  [2024-11-17T18:17:32.992Z] Copying: 511/511 [MB] (average 1630 MBps) 00:08:34.726 00:08:34.726 18:17:32 -- dd/uring.sh@54 -- # gen_conf 00:08:34.726 18:17:32 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:34.726 18:17:32 -- dd/common.sh@31 -- # xtrace_disable 00:08:34.726 18:17:32 -- common/autotest_common.sh@10 -- # set +x 00:08:34.726 [2024-11-17 18:17:32.826771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
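Note: the uring_zram_copy setup above reads a new device id from /sys/class/zram-control/hot_add and echoes '512M' and, later, '1' into sysfs. The xtrace output only shows the echoed values, so the redirect targets in the sketch below (disksize, reset, hot_remove) are the standard zram sysfs nodes and are an assumption rather than something printed in the log:

# Sketch: provision and later tear down the 512M zram device backing uring0.
id=$(cat /sys/class/zram-control/hot_add)        # prints the new device number (1 here)
echo 512M > "/sys/block/zram${id}/disksize"      # assumed target of the 'echo 512M'
# ... dd_uring_copy runs against /dev/zram${id} through the uring0 bdev ...
echo 1 > "/sys/block/zram${id}/reset"            # assumed target of the later 'echo 1'
echo "$id" > /sys/class/zram-control/hot_remove  # assumed target of the second one (id is 1)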
00:08:34.726 [2024-11-17 18:17:32.826863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70722 ] 00:08:34.726 { 00:08:34.726 "subsystems": [ 00:08:34.726 { 00:08:34.726 "subsystem": "bdev", 00:08:34.726 "config": [ 00:08:34.726 { 00:08:34.726 "params": { 00:08:34.726 "block_size": 512, 00:08:34.726 "num_blocks": 1048576, 00:08:34.726 "name": "malloc0" 00:08:34.726 }, 00:08:34.726 "method": "bdev_malloc_create" 00:08:34.726 }, 00:08:34.726 { 00:08:34.726 "params": { 00:08:34.726 "filename": "/dev/zram1", 00:08:34.726 "name": "uring0" 00:08:34.726 }, 00:08:34.726 "method": "bdev_uring_create" 00:08:34.726 }, 00:08:34.726 { 00:08:34.726 "method": "bdev_wait_for_examine" 00:08:34.726 } 00:08:34.726 ] 00:08:34.726 } 00:08:34.726 ] 00:08:34.726 } 00:08:34.726 [2024-11-17 18:17:32.963255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.984 [2024-11-17 18:17:32.996955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.921  [2024-11-17T18:17:35.566Z] Copying: 243/512 [MB] (243 MBps) [2024-11-17T18:17:35.566Z] Copying: 487/512 [MB] (243 MBps) [2024-11-17T18:17:35.566Z] Copying: 512/512 [MB] (average 244 MBps) 00:08:37.299 00:08:37.299 18:17:35 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:37.299 18:17:35 -- dd/uring.sh@60 -- # gen_conf 00:08:37.299 18:17:35 -- dd/common.sh@31 -- # xtrace_disable 00:08:37.299 18:17:35 -- common/autotest_common.sh@10 -- # set +x 00:08:37.299 [2024-11-17 18:17:35.519366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
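Note: the config printed above declares a 512 MiB malloc bdev (1048576 blocks of 512 bytes) as the source and a uring bdev on /dev/zram1 as the target, matching the 512M zram size. The same layout could be kept in an ordinary file and handed to spdk_dd, assuming --json accepts a regular file path just as it accepts the /dev/fd/62 path the tests use; the file name here is illustrative only:

# Sketch: the same bdev layout as a standalone config file.
cat > uring_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_uring_create",
          "params": { "name": "uring0", "filename": "/dev/zram1" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# then e.g.: "$SPDK_DD" --if=magic.dump0 --ob=uring0 --json uring_copy.json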
00:08:37.299 [2024-11-17 18:17:35.519457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70765 ] 00:08:37.299 { 00:08:37.299 "subsystems": [ 00:08:37.299 { 00:08:37.299 "subsystem": "bdev", 00:08:37.299 "config": [ 00:08:37.299 { 00:08:37.299 "params": { 00:08:37.299 "block_size": 512, 00:08:37.299 "num_blocks": 1048576, 00:08:37.299 "name": "malloc0" 00:08:37.299 }, 00:08:37.299 "method": "bdev_malloc_create" 00:08:37.299 }, 00:08:37.299 { 00:08:37.299 "params": { 00:08:37.299 "filename": "/dev/zram1", 00:08:37.299 "name": "uring0" 00:08:37.299 }, 00:08:37.299 "method": "bdev_uring_create" 00:08:37.299 }, 00:08:37.299 { 00:08:37.299 "method": "bdev_wait_for_examine" 00:08:37.299 } 00:08:37.299 ] 00:08:37.299 } 00:08:37.299 ] 00:08:37.299 } 00:08:37.558 [2024-11-17 18:17:35.648002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.558 [2024-11-17 18:17:35.678274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.937  [2024-11-17T18:17:38.141Z] Copying: 163/512 [MB] (163 MBps) [2024-11-17T18:17:39.077Z] Copying: 317/512 [MB] (154 MBps) [2024-11-17T18:17:39.336Z] Copying: 469/512 [MB] (151 MBps) [2024-11-17T18:17:39.594Z] Copying: 512/512 [MB] (average 152 MBps) 00:08:41.327 00:08:41.327 18:17:39 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:41.327 18:17:39 -- dd/uring.sh@66 -- # [[ lk2k87slfvb6ax9gwx81xatgrpalxhb9l7skj9w2p3mws25buypb6rgo6fd56ka3l4jjts1qs2stw205qnptky21og8kamsgxlo202mhxide3sjnwv4ic2fsx64h249ap6uhiq7202gtghn3vf4v7iyk4d1vna6ol0a4bl1ai7fzhs3hj6c7vpwlt5zwobkv846cf3fdgtnb5ytggfhf8w5m3b7a4omnx0rvxt6eprsbzfij8o7om1a4qck0cd8q890u1qenfmeksvjafaui67sk8w4xqhlreq1sa6oqbupyquqld8j8dnprwpobi2l1y2qrpcaspazmm1fhya3ojc3mvdfgqyoh6mcmzsg531mvusq24mm6we7uar3k42av3vvv8ujgndj3ai69npl61io4gmhzqq7zwz8pp9me3tsdujjc5i23vjax3zqkhl0zd0wi9ldtkdv33zvo09l99th9lpcdepekqki0p17au747pgkfejnrzwm5n3q57f4uhmibvjqlfbvx0z3z6c3kw4xn8wfo5f981n2a3s0z6ewno7xfm43dame9zu3ulagg2ezifwkcrvvq5dsg8ghhhg22upmskdp513s9ggaogj7v8f6267m3f2daqbnj7qc3i7cus6q9gwif9h56gj12vjgalmtnpyg4abziiev60o0h1g4m3244h1icrvarn052iirdwoi61yvi1e71hvnew7ih210zds4hsl2y9fmtj3vbeu7qf6ig2zch92m8zhglzrcjq00dh9g7ll9ijtmskajiotuupesesu3z96u8njobxm2crkrd9vs8u0rlmzso7riaal78osded3wxyjiwd4o5mngzd84bsyn0azzxgmrzf2i2eqic1xg1tmsjjiphds40oa6ve038m68jwwnab8r5nznmm66h2c26z6ijod1rxvpaitmgwxv3pbnntv3r7hvh1mv4yya1jm3fuk2zbgsaytz9elcyrozlzsxrn5xnt1aju1o5o21fpcumbdho == 
\l\k\2\k\8\7\s\l\f\v\b\6\a\x\9\g\w\x\8\1\x\a\t\g\r\p\a\l\x\h\b\9\l\7\s\k\j\9\w\2\p\3\m\w\s\2\5\b\u\y\p\b\6\r\g\o\6\f\d\5\6\k\a\3\l\4\j\j\t\s\1\q\s\2\s\t\w\2\0\5\q\n\p\t\k\y\2\1\o\g\8\k\a\m\s\g\x\l\o\2\0\2\m\h\x\i\d\e\3\s\j\n\w\v\4\i\c\2\f\s\x\6\4\h\2\4\9\a\p\6\u\h\i\q\7\2\0\2\g\t\g\h\n\3\v\f\4\v\7\i\y\k\4\d\1\v\n\a\6\o\l\0\a\4\b\l\1\a\i\7\f\z\h\s\3\h\j\6\c\7\v\p\w\l\t\5\z\w\o\b\k\v\8\4\6\c\f\3\f\d\g\t\n\b\5\y\t\g\g\f\h\f\8\w\5\m\3\b\7\a\4\o\m\n\x\0\r\v\x\t\6\e\p\r\s\b\z\f\i\j\8\o\7\o\m\1\a\4\q\c\k\0\c\d\8\q\8\9\0\u\1\q\e\n\f\m\e\k\s\v\j\a\f\a\u\i\6\7\s\k\8\w\4\x\q\h\l\r\e\q\1\s\a\6\o\q\b\u\p\y\q\u\q\l\d\8\j\8\d\n\p\r\w\p\o\b\i\2\l\1\y\2\q\r\p\c\a\s\p\a\z\m\m\1\f\h\y\a\3\o\j\c\3\m\v\d\f\g\q\y\o\h\6\m\c\m\z\s\g\5\3\1\m\v\u\s\q\2\4\m\m\6\w\e\7\u\a\r\3\k\4\2\a\v\3\v\v\v\8\u\j\g\n\d\j\3\a\i\6\9\n\p\l\6\1\i\o\4\g\m\h\z\q\q\7\z\w\z\8\p\p\9\m\e\3\t\s\d\u\j\j\c\5\i\2\3\v\j\a\x\3\z\q\k\h\l\0\z\d\0\w\i\9\l\d\t\k\d\v\3\3\z\v\o\0\9\l\9\9\t\h\9\l\p\c\d\e\p\e\k\q\k\i\0\p\1\7\a\u\7\4\7\p\g\k\f\e\j\n\r\z\w\m\5\n\3\q\5\7\f\4\u\h\m\i\b\v\j\q\l\f\b\v\x\0\z\3\z\6\c\3\k\w\4\x\n\8\w\f\o\5\f\9\8\1\n\2\a\3\s\0\z\6\e\w\n\o\7\x\f\m\4\3\d\a\m\e\9\z\u\3\u\l\a\g\g\2\e\z\i\f\w\k\c\r\v\v\q\5\d\s\g\8\g\h\h\h\g\2\2\u\p\m\s\k\d\p\5\1\3\s\9\g\g\a\o\g\j\7\v\8\f\6\2\6\7\m\3\f\2\d\a\q\b\n\j\7\q\c\3\i\7\c\u\s\6\q\9\g\w\i\f\9\h\5\6\g\j\1\2\v\j\g\a\l\m\t\n\p\y\g\4\a\b\z\i\i\e\v\6\0\o\0\h\1\g\4\m\3\2\4\4\h\1\i\c\r\v\a\r\n\0\5\2\i\i\r\d\w\o\i\6\1\y\v\i\1\e\7\1\h\v\n\e\w\7\i\h\2\1\0\z\d\s\4\h\s\l\2\y\9\f\m\t\j\3\v\b\e\u\7\q\f\6\i\g\2\z\c\h\9\2\m\8\z\h\g\l\z\r\c\j\q\0\0\d\h\9\g\7\l\l\9\i\j\t\m\s\k\a\j\i\o\t\u\u\p\e\s\e\s\u\3\z\9\6\u\8\n\j\o\b\x\m\2\c\r\k\r\d\9\v\s\8\u\0\r\l\m\z\s\o\7\r\i\a\a\l\7\8\o\s\d\e\d\3\w\x\y\j\i\w\d\4\o\5\m\n\g\z\d\8\4\b\s\y\n\0\a\z\z\x\g\m\r\z\f\2\i\2\e\q\i\c\1\x\g\1\t\m\s\j\j\i\p\h\d\s\4\0\o\a\6\v\e\0\3\8\m\6\8\j\w\w\n\a\b\8\r\5\n\z\n\m\m\6\6\h\2\c\2\6\z\6\i\j\o\d\1\r\x\v\p\a\i\t\m\g\w\x\v\3\p\b\n\n\t\v\3\r\7\h\v\h\1\m\v\4\y\y\a\1\j\m\3\f\u\k\2\z\b\g\s\a\y\t\z\9\e\l\c\y\r\o\z\l\z\s\x\r\n\5\x\n\t\1\a\j\u\1\o\5\o\2\1\f\p\c\u\m\b\d\h\o ]] 00:08:41.327 18:17:39 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:41.327 18:17:39 -- dd/uring.sh@69 -- # [[ lk2k87slfvb6ax9gwx81xatgrpalxhb9l7skj9w2p3mws25buypb6rgo6fd56ka3l4jjts1qs2stw205qnptky21og8kamsgxlo202mhxide3sjnwv4ic2fsx64h249ap6uhiq7202gtghn3vf4v7iyk4d1vna6ol0a4bl1ai7fzhs3hj6c7vpwlt5zwobkv846cf3fdgtnb5ytggfhf8w5m3b7a4omnx0rvxt6eprsbzfij8o7om1a4qck0cd8q890u1qenfmeksvjafaui67sk8w4xqhlreq1sa6oqbupyquqld8j8dnprwpobi2l1y2qrpcaspazmm1fhya3ojc3mvdfgqyoh6mcmzsg531mvusq24mm6we7uar3k42av3vvv8ujgndj3ai69npl61io4gmhzqq7zwz8pp9me3tsdujjc5i23vjax3zqkhl0zd0wi9ldtkdv33zvo09l99th9lpcdepekqki0p17au747pgkfejnrzwm5n3q57f4uhmibvjqlfbvx0z3z6c3kw4xn8wfo5f981n2a3s0z6ewno7xfm43dame9zu3ulagg2ezifwkcrvvq5dsg8ghhhg22upmskdp513s9ggaogj7v8f6267m3f2daqbnj7qc3i7cus6q9gwif9h56gj12vjgalmtnpyg4abziiev60o0h1g4m3244h1icrvarn052iirdwoi61yvi1e71hvnew7ih210zds4hsl2y9fmtj3vbeu7qf6ig2zch92m8zhglzrcjq00dh9g7ll9ijtmskajiotuupesesu3z96u8njobxm2crkrd9vs8u0rlmzso7riaal78osded3wxyjiwd4o5mngzd84bsyn0azzxgmrzf2i2eqic1xg1tmsjjiphds40oa6ve038m68jwwnab8r5nznmm66h2c26z6ijod1rxvpaitmgwxv3pbnntv3r7hvh1mv4yya1jm3fuk2zbgsaytz9elcyrozlzsxrn5xnt1aju1o5o21fpcumbdho == 
\l\k\2\k\8\7\s\l\f\v\b\6\a\x\9\g\w\x\8\1\x\a\t\g\r\p\a\l\x\h\b\9\l\7\s\k\j\9\w\2\p\3\m\w\s\2\5\b\u\y\p\b\6\r\g\o\6\f\d\5\6\k\a\3\l\4\j\j\t\s\1\q\s\2\s\t\w\2\0\5\q\n\p\t\k\y\2\1\o\g\8\k\a\m\s\g\x\l\o\2\0\2\m\h\x\i\d\e\3\s\j\n\w\v\4\i\c\2\f\s\x\6\4\h\2\4\9\a\p\6\u\h\i\q\7\2\0\2\g\t\g\h\n\3\v\f\4\v\7\i\y\k\4\d\1\v\n\a\6\o\l\0\a\4\b\l\1\a\i\7\f\z\h\s\3\h\j\6\c\7\v\p\w\l\t\5\z\w\o\b\k\v\8\4\6\c\f\3\f\d\g\t\n\b\5\y\t\g\g\f\h\f\8\w\5\m\3\b\7\a\4\o\m\n\x\0\r\v\x\t\6\e\p\r\s\b\z\f\i\j\8\o\7\o\m\1\a\4\q\c\k\0\c\d\8\q\8\9\0\u\1\q\e\n\f\m\e\k\s\v\j\a\f\a\u\i\6\7\s\k\8\w\4\x\q\h\l\r\e\q\1\s\a\6\o\q\b\u\p\y\q\u\q\l\d\8\j\8\d\n\p\r\w\p\o\b\i\2\l\1\y\2\q\r\p\c\a\s\p\a\z\m\m\1\f\h\y\a\3\o\j\c\3\m\v\d\f\g\q\y\o\h\6\m\c\m\z\s\g\5\3\1\m\v\u\s\q\2\4\m\m\6\w\e\7\u\a\r\3\k\4\2\a\v\3\v\v\v\8\u\j\g\n\d\j\3\a\i\6\9\n\p\l\6\1\i\o\4\g\m\h\z\q\q\7\z\w\z\8\p\p\9\m\e\3\t\s\d\u\j\j\c\5\i\2\3\v\j\a\x\3\z\q\k\h\l\0\z\d\0\w\i\9\l\d\t\k\d\v\3\3\z\v\o\0\9\l\9\9\t\h\9\l\p\c\d\e\p\e\k\q\k\i\0\p\1\7\a\u\7\4\7\p\g\k\f\e\j\n\r\z\w\m\5\n\3\q\5\7\f\4\u\h\m\i\b\v\j\q\l\f\b\v\x\0\z\3\z\6\c\3\k\w\4\x\n\8\w\f\o\5\f\9\8\1\n\2\a\3\s\0\z\6\e\w\n\o\7\x\f\m\4\3\d\a\m\e\9\z\u\3\u\l\a\g\g\2\e\z\i\f\w\k\c\r\v\v\q\5\d\s\g\8\g\h\h\h\g\2\2\u\p\m\s\k\d\p\5\1\3\s\9\g\g\a\o\g\j\7\v\8\f\6\2\6\7\m\3\f\2\d\a\q\b\n\j\7\q\c\3\i\7\c\u\s\6\q\9\g\w\i\f\9\h\5\6\g\j\1\2\v\j\g\a\l\m\t\n\p\y\g\4\a\b\z\i\i\e\v\6\0\o\0\h\1\g\4\m\3\2\4\4\h\1\i\c\r\v\a\r\n\0\5\2\i\i\r\d\w\o\i\6\1\y\v\i\1\e\7\1\h\v\n\e\w\7\i\h\2\1\0\z\d\s\4\h\s\l\2\y\9\f\m\t\j\3\v\b\e\u\7\q\f\6\i\g\2\z\c\h\9\2\m\8\z\h\g\l\z\r\c\j\q\0\0\d\h\9\g\7\l\l\9\i\j\t\m\s\k\a\j\i\o\t\u\u\p\e\s\e\s\u\3\z\9\6\u\8\n\j\o\b\x\m\2\c\r\k\r\d\9\v\s\8\u\0\r\l\m\z\s\o\7\r\i\a\a\l\7\8\o\s\d\e\d\3\w\x\y\j\i\w\d\4\o\5\m\n\g\z\d\8\4\b\s\y\n\0\a\z\z\x\g\m\r\z\f\2\i\2\e\q\i\c\1\x\g\1\t\m\s\j\j\i\p\h\d\s\4\0\o\a\6\v\e\0\3\8\m\6\8\j\w\w\n\a\b\8\r\5\n\z\n\m\m\6\6\h\2\c\2\6\z\6\i\j\o\d\1\r\x\v\p\a\i\t\m\g\w\x\v\3\p\b\n\n\t\v\3\r\7\h\v\h\1\m\v\4\y\y\a\1\j\m\3\f\u\k\2\z\b\g\s\a\y\t\z\9\e\l\c\y\r\o\z\l\z\s\x\r\n\5\x\n\t\1\a\j\u\1\o\5\o\2\1\f\p\c\u\m\b\d\h\o ]] 00:08:41.327 18:17:39 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:41.585 18:17:39 -- dd/uring.sh@75 -- # gen_conf 00:08:41.585 18:17:39 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:41.585 18:17:39 -- dd/common.sh@31 -- # xtrace_disable 00:08:41.585 18:17:39 -- common/autotest_common.sh@10 -- # set +x 00:08:41.585 [2024-11-17 18:17:39.841642] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:41.585 [2024-11-17 18:17:39.841742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70839 ] 00:08:41.844 { 00:08:41.844 "subsystems": [ 00:08:41.844 { 00:08:41.844 "subsystem": "bdev", 00:08:41.844 "config": [ 00:08:41.844 { 00:08:41.844 "params": { 00:08:41.844 "block_size": 512, 00:08:41.844 "num_blocks": 1048576, 00:08:41.844 "name": "malloc0" 00:08:41.844 }, 00:08:41.844 "method": "bdev_malloc_create" 00:08:41.844 }, 00:08:41.844 { 00:08:41.844 "params": { 00:08:41.844 "filename": "/dev/zram1", 00:08:41.844 "name": "uring0" 00:08:41.844 }, 00:08:41.844 "method": "bdev_uring_create" 00:08:41.844 }, 00:08:41.844 { 00:08:41.844 "method": "bdev_wait_for_examine" 00:08:41.844 } 00:08:41.844 ] 00:08:41.844 } 00:08:41.844 ] 00:08:41.844 } 00:08:41.844 [2024-11-17 18:17:39.978860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.844 [2024-11-17 18:17:40.011649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.221  [2024-11-17T18:17:42.424Z] Copying: 177/512 [MB] (177 MBps) [2024-11-17T18:17:43.361Z] Copying: 353/512 [MB] (176 MBps) [2024-11-17T18:17:43.361Z] Copying: 512/512 [MB] (average 176 MBps) 00:08:45.094 00:08:45.094 18:17:43 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:45.094 18:17:43 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:45.094 18:17:43 -- dd/uring.sh@87 -- # : 00:08:45.094 18:17:43 -- dd/uring.sh@87 -- # : 00:08:45.094 18:17:43 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:45.094 18:17:43 -- dd/uring.sh@87 -- # gen_conf 00:08:45.094 18:17:43 -- dd/common.sh@31 -- # xtrace_disable 00:08:45.094 18:17:43 -- common/autotest_common.sh@10 -- # set +x 00:08:45.094 [2024-11-17 18:17:43.306966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:45.094 [2024-11-17 18:17:43.307072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70884 ] 00:08:45.094 { 00:08:45.094 "subsystems": [ 00:08:45.094 { 00:08:45.094 "subsystem": "bdev", 00:08:45.094 "config": [ 00:08:45.094 { 00:08:45.094 "params": { 00:08:45.094 "block_size": 512, 00:08:45.094 "num_blocks": 1048576, 00:08:45.094 "name": "malloc0" 00:08:45.094 }, 00:08:45.094 "method": "bdev_malloc_create" 00:08:45.094 }, 00:08:45.094 { 00:08:45.094 "params": { 00:08:45.094 "filename": "/dev/zram1", 00:08:45.094 "name": "uring0" 00:08:45.094 }, 00:08:45.094 "method": "bdev_uring_create" 00:08:45.094 }, 00:08:45.094 { 00:08:45.094 "params": { 00:08:45.094 "name": "uring0" 00:08:45.094 }, 00:08:45.094 "method": "bdev_uring_delete" 00:08:45.094 }, 00:08:45.094 { 00:08:45.094 "method": "bdev_wait_for_examine" 00:08:45.094 } 00:08:45.094 ] 00:08:45.094 } 00:08:45.094 ] 00:08:45.094 } 00:08:45.353 [2024-11-17 18:17:43.436199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.353 [2024-11-17 18:17:43.468772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.353  [2024-11-17T18:17:43.880Z] Copying: 0/0 [B] (average 0 Bps) 00:08:45.613 00:08:45.613 18:17:43 -- dd/uring.sh@94 -- # : 00:08:45.613 18:17:43 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:45.613 18:17:43 -- dd/uring.sh@94 -- # gen_conf 00:08:45.613 18:17:43 -- common/autotest_common.sh@650 -- # local es=0 00:08:45.613 18:17:43 -- dd/common.sh@31 -- # xtrace_disable 00:08:45.613 18:17:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:45.613 18:17:43 -- common/autotest_common.sh@10 -- # set +x 00:08:45.613 18:17:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.613 18:17:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.613 18:17:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.613 18:17:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.613 18:17:43 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.613 18:17:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.613 18:17:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.613 18:17:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:45.613 18:17:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:45.872 [2024-11-17 18:17:43.888747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
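Note: the final dd_uring step above deletes uring0 and then runs spdk_dd against it inside the NOT wrapper, so the 'No such device' errors and the es=237 bookkeeping that follow are the expected outcome rather than a failure of the job. A bare-bones version of that expected-failure check, without the autotest_common.sh helpers, might look like this (conf.json stands in for the generated config and is only illustrative):

# Sketch: after bdev_uring_delete removes uring0, reading from it must fail.
if "$SPDK_DD" --ib=uring0 --of=/dev/null --json conf.json; then
    echo "ERROR: copy from the deleted uring0 bdev unexpectedly succeeded" >&2
    exit 1
fi
echo "uring0 delete error path behaved as expected"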
00:08:45.872 [2024-11-17 18:17:43.888838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70907 ] 00:08:45.872 { 00:08:45.872 "subsystems": [ 00:08:45.872 { 00:08:45.872 "subsystem": "bdev", 00:08:45.872 "config": [ 00:08:45.872 { 00:08:45.872 "params": { 00:08:45.872 "block_size": 512, 00:08:45.872 "num_blocks": 1048576, 00:08:45.872 "name": "malloc0" 00:08:45.872 }, 00:08:45.872 "method": "bdev_malloc_create" 00:08:45.872 }, 00:08:45.872 { 00:08:45.872 "params": { 00:08:45.872 "filename": "/dev/zram1", 00:08:45.872 "name": "uring0" 00:08:45.872 }, 00:08:45.872 "method": "bdev_uring_create" 00:08:45.872 }, 00:08:45.872 { 00:08:45.872 "params": { 00:08:45.872 "name": "uring0" 00:08:45.872 }, 00:08:45.872 "method": "bdev_uring_delete" 00:08:45.872 }, 00:08:45.872 { 00:08:45.872 "method": "bdev_wait_for_examine" 00:08:45.872 } 00:08:45.872 ] 00:08:45.872 } 00:08:45.872 ] 00:08:45.872 } 00:08:45.872 [2024-11-17 18:17:44.026248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.872 [2024-11-17 18:17:44.056126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.131 [2024-11-17 18:17:44.201930] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:46.131 [2024-11-17 18:17:44.201991] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:46.131 [2024-11-17 18:17:44.202001] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:08:46.131 [2024-11-17 18:17:44.202010] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:46.131 [2024-11-17 18:17:44.357149] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:46.390 18:17:44 -- common/autotest_common.sh@653 -- # es=237 00:08:46.390 18:17:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:46.390 18:17:44 -- common/autotest_common.sh@662 -- # es=109 00:08:46.390 18:17:44 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:46.390 18:17:44 -- common/autotest_common.sh@670 -- # es=1 00:08:46.390 18:17:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:46.390 18:17:44 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:46.390 18:17:44 -- dd/common.sh@172 -- # local id=1 00:08:46.390 18:17:44 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:46.390 18:17:44 -- dd/common.sh@176 -- # echo 1 00:08:46.390 18:17:44 -- dd/common.sh@177 -- # echo 1 00:08:46.390 18:17:44 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:46.390 00:08:46.390 real 0m12.818s 00:08:46.390 user 0m7.208s 00:08:46.390 sys 0m4.959s 00:08:46.390 18:17:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.390 18:17:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.390 ************************************ 00:08:46.390 END TEST dd_uring_copy 00:08:46.390 ************************************ 00:08:46.649 00:08:46.649 real 0m13.049s 00:08:46.649 user 0m7.342s 00:08:46.649 sys 0m5.063s 00:08:46.649 18:17:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.649 18:17:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.649 ************************************ 00:08:46.649 END TEST spdk_dd_uring 00:08:46.649 ************************************ 00:08:46.649 18:17:44 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:46.649 18:17:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:46.649 18:17:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.649 18:17:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.649 ************************************ 00:08:46.649 START TEST spdk_dd_sparse 00:08:46.649 ************************************ 00:08:46.649 18:17:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:46.649 * Looking for test storage... 00:08:46.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:46.649 18:17:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:46.649 18:17:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:46.649 18:17:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:46.909 18:17:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:46.909 18:17:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:46.909 18:17:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:46.909 18:17:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:46.909 18:17:44 -- scripts/common.sh@335 -- # IFS=.-: 00:08:46.909 18:17:44 -- scripts/common.sh@335 -- # read -ra ver1 00:08:46.909 18:17:44 -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.909 18:17:44 -- scripts/common.sh@336 -- # read -ra ver2 00:08:46.909 18:17:44 -- scripts/common.sh@337 -- # local 'op=<' 00:08:46.909 18:17:44 -- scripts/common.sh@339 -- # ver1_l=2 00:08:46.909 18:17:44 -- scripts/common.sh@340 -- # ver2_l=1 00:08:46.909 18:17:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:46.909 18:17:44 -- scripts/common.sh@343 -- # case "$op" in 00:08:46.909 18:17:44 -- scripts/common.sh@344 -- # : 1 00:08:46.909 18:17:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:46.909 18:17:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.909 18:17:44 -- scripts/common.sh@364 -- # decimal 1 00:08:46.909 18:17:44 -- scripts/common.sh@352 -- # local d=1 00:08:46.909 18:17:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.909 18:17:44 -- scripts/common.sh@354 -- # echo 1 00:08:46.909 18:17:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:46.909 18:17:44 -- scripts/common.sh@365 -- # decimal 2 00:08:46.909 18:17:44 -- scripts/common.sh@352 -- # local d=2 00:08:46.909 18:17:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.909 18:17:44 -- scripts/common.sh@354 -- # echo 2 00:08:46.909 18:17:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:46.909 18:17:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:46.909 18:17:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:46.909 18:17:44 -- scripts/common.sh@367 -- # return 0 00:08:46.909 18:17:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.909 18:17:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:46.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.909 --rc genhtml_branch_coverage=1 00:08:46.909 --rc genhtml_function_coverage=1 00:08:46.909 --rc genhtml_legend=1 00:08:46.909 --rc geninfo_all_blocks=1 00:08:46.909 --rc geninfo_unexecuted_blocks=1 00:08:46.909 00:08:46.909 ' 00:08:46.909 18:17:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:46.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.909 --rc genhtml_branch_coverage=1 00:08:46.909 --rc genhtml_function_coverage=1 00:08:46.909 --rc genhtml_legend=1 00:08:46.909 --rc geninfo_all_blocks=1 00:08:46.909 --rc geninfo_unexecuted_blocks=1 00:08:46.909 00:08:46.909 ' 00:08:46.909 18:17:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:46.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.909 --rc genhtml_branch_coverage=1 00:08:46.909 --rc genhtml_function_coverage=1 00:08:46.909 --rc genhtml_legend=1 00:08:46.909 --rc geninfo_all_blocks=1 00:08:46.909 --rc geninfo_unexecuted_blocks=1 00:08:46.909 00:08:46.909 ' 00:08:46.909 18:17:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:46.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.909 --rc genhtml_branch_coverage=1 00:08:46.909 --rc genhtml_function_coverage=1 00:08:46.909 --rc genhtml_legend=1 00:08:46.909 --rc geninfo_all_blocks=1 00:08:46.909 --rc geninfo_unexecuted_blocks=1 00:08:46.909 00:08:46.909 ' 00:08:46.909 18:17:44 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:46.909 18:17:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.909 18:17:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.909 18:17:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.909 18:17:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.910 18:17:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.910 18:17:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.910 18:17:44 -- paths/export.sh@5 -- # export PATH 00:08:46.910 18:17:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.910 18:17:44 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:46.910 18:17:44 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:46.910 18:17:44 -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:46.910 18:17:44 -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:46.910 18:17:44 -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:46.910 18:17:44 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:46.910 18:17:44 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:46.910 18:17:44 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:46.910 18:17:44 -- dd/sparse.sh@118 -- # prepare 00:08:46.910 18:17:44 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:46.910 18:17:44 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:46.910 1+0 records in 00:08:46.910 1+0 records out 00:08:46.910 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00548584 s, 765 MB/s 00:08:46.910 18:17:44 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:46.910 1+0 records in 00:08:46.910 1+0 records out 00:08:46.910 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00637587 s, 658 MB/s 00:08:46.910 18:17:44 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:46.910 1+0 records in 00:08:46.910 1+0 records out 00:08:46.910 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00532395 s, 788 MB/s 00:08:46.910 18:17:44 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:46.910 18:17:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:46.910 18:17:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.910 18:17:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.910 ************************************ 00:08:46.910 START TEST dd_sparse_file_to_file 00:08:46.910 
************************************ 00:08:46.910 18:17:44 -- common/autotest_common.sh@1114 -- # file_to_file 00:08:46.910 18:17:44 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:46.910 18:17:44 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:46.910 18:17:44 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:46.910 18:17:44 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:46.910 18:17:44 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:46.910 18:17:44 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:46.910 18:17:44 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:46.910 18:17:44 -- dd/sparse.sh@41 -- # gen_conf 00:08:46.910 18:17:44 -- dd/common.sh@31 -- # xtrace_disable 00:08:46.910 18:17:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.910 [2024-11-17 18:17:45.030142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:46.910 [2024-11-17 18:17:45.030988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71005 ] 00:08:46.910 { 00:08:46.910 "subsystems": [ 00:08:46.910 { 00:08:46.910 "subsystem": "bdev", 00:08:46.910 "config": [ 00:08:46.910 { 00:08:46.910 "params": { 00:08:46.910 "block_size": 4096, 00:08:46.910 "filename": "dd_sparse_aio_disk", 00:08:46.910 "name": "dd_aio" 00:08:46.910 }, 00:08:46.910 "method": "bdev_aio_create" 00:08:46.910 }, 00:08:46.910 { 00:08:46.910 "params": { 00:08:46.910 "lvs_name": "dd_lvstore", 00:08:46.910 "bdev_name": "dd_aio" 00:08:46.910 }, 00:08:46.910 "method": "bdev_lvol_create_lvstore" 00:08:46.910 }, 00:08:46.910 { 00:08:46.910 "method": "bdev_wait_for_examine" 00:08:46.910 } 00:08:46.910 ] 00:08:46.910 } 00:08:46.910 ] 00:08:46.910 } 00:08:46.910 [2024-11-17 18:17:45.167149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.169 [2024-11-17 18:17:45.198873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.169  [2024-11-17T18:17:45.695Z] Copying: 12/36 [MB] (average 1714 MBps) 00:08:47.428 00:08:47.428 18:17:45 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:47.428 18:17:45 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:47.428 18:17:45 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:47.428 18:17:45 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:47.428 18:17:45 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:47.428 18:17:45 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:47.428 18:17:45 -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:47.428 18:17:45 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:47.428 18:17:45 -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:47.428 18:17:45 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:47.428 00:08:47.428 real 0m0.492s 00:08:47.428 user 0m0.261s 00:08:47.428 sys 0m0.137s 00:08:47.428 18:17:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:47.428 18:17:45 -- common/autotest_common.sh@10 -- # set +x 00:08:47.428 ************************************ 00:08:47.428 END TEST dd_sparse_file_to_file 00:08:47.428 ************************************ 00:08:47.428 18:17:45 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:08:47.428 18:17:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:47.428 18:17:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.428 18:17:45 -- common/autotest_common.sh@10 -- # set +x 00:08:47.428 ************************************ 00:08:47.428 START TEST dd_sparse_file_to_bdev 00:08:47.428 ************************************ 00:08:47.428 18:17:45 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:08:47.428 18:17:45 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:47.428 18:17:45 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:47.428 18:17:45 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:08:47.428 18:17:45 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:47.428 18:17:45 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:47.428 18:17:45 -- dd/sparse.sh@73 -- # gen_conf 00:08:47.428 18:17:45 -- dd/common.sh@31 -- # xtrace_disable 00:08:47.428 18:17:45 -- common/autotest_common.sh@10 -- # set +x 00:08:47.428 [2024-11-17 18:17:45.569249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:47.428 [2024-11-17 18:17:45.569761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71046 ] 00:08:47.428 { 00:08:47.428 "subsystems": [ 00:08:47.428 { 00:08:47.428 "subsystem": "bdev", 00:08:47.428 "config": [ 00:08:47.428 { 00:08:47.428 "params": { 00:08:47.428 "block_size": 4096, 00:08:47.428 "filename": "dd_sparse_aio_disk", 00:08:47.428 "name": "dd_aio" 00:08:47.428 }, 00:08:47.428 "method": "bdev_aio_create" 00:08:47.428 }, 00:08:47.428 { 00:08:47.428 "params": { 00:08:47.428 "lvs_name": "dd_lvstore", 00:08:47.428 "lvol_name": "dd_lvol", 00:08:47.428 "size": 37748736, 00:08:47.428 "thin_provision": true 00:08:47.428 }, 00:08:47.428 "method": "bdev_lvol_create" 00:08:47.428 }, 00:08:47.428 { 00:08:47.428 "method": "bdev_wait_for_examine" 00:08:47.428 } 00:08:47.428 ] 00:08:47.428 } 00:08:47.428 ] 00:08:47.428 } 00:08:47.687 [2024-11-17 18:17:45.706893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.687 [2024-11-17 18:17:45.737155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.687 [2024-11-17 18:17:45.791863] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:08:47.687  [2024-11-17T18:17:45.954Z] Copying: 12/36 [MB] (average 521 MBps)[2024-11-17 18:17:45.830315] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:08:47.946 00:08:47.946 00:08:47.946 00:08:47.946 real 0m0.468s 00:08:47.946 user 0m0.273s 00:08:47.946 sys 0m0.123s 00:08:47.946 18:17:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:47.946 18:17:45 -- common/autotest_common.sh@10 -- # set +x 00:08:47.946 ************************************ 00:08:47.946 END TEST dd_sparse_file_to_bdev 00:08:47.946 ************************************ 
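Stripped of the harness plumbing, the two sparse tests above reduce to a short shell sequence: seed a 36 MiB file with three 4 MiB data extents (holes in between), copy it through spdk_dd with --sparse and a small JSON bdev config, then compare stat's apparent size (%s) and allocated blocks (%b) on source and copy. The sketch below is a minimal reconstruction of that flow, not the harness itself; it assumes it is run from an SPDK checkout with spdk_dd built at build/bin/spdk_dd, and it writes the JSON config to a plain file instead of the /dev/fd/62 descriptor the trace shows.

  truncate --size 104857600 dd_sparse_aio_disk      # 100 MiB backing file for the aio bdev
  for off in 0 4 8; do                              # data at 0-4, 16-20 and 32-36 MiB, holes elsewhere
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek="$off"
  done
  cat > dd_aio.json <<'EOF'                          # same config gen_conf emits in the trace above
  {"subsystems":[{"subsystem":"bdev","config":[
    {"method":"bdev_aio_create","params":{"filename":"dd_sparse_aio_disk","name":"dd_aio","block_size":4096}},
    {"method":"bdev_lvol_create_lvstore","params":{"bdev_name":"dd_aio","lvs_name":"dd_lvstore"}},
    {"method":"bdev_wait_for_examine"}
  ]}]}
  EOF
  ./build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json dd_aio.json
  stat --printf='%s %b\n' file_zero1 file_zero2     # expect "37748736 24576" for both files

If the holes were copied as literal zeroes, %b on the copy would rise to 73728 (37748736 / 512) instead of staying at 24576, which is exactly the comparison the [[ 24576 == 24576 ]] lines in the trace perform.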
00:08:47.946 18:17:46 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:47.946 18:17:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:47.946 18:17:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.947 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:47.947 ************************************ 00:08:47.947 START TEST dd_sparse_bdev_to_file 00:08:47.947 ************************************ 00:08:47.947 18:17:46 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:08:47.947 18:17:46 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:47.947 18:17:46 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:47.947 18:17:46 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:47.947 18:17:46 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:47.947 18:17:46 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:47.947 18:17:46 -- dd/sparse.sh@91 -- # gen_conf 00:08:47.947 18:17:46 -- dd/common.sh@31 -- # xtrace_disable 00:08:47.947 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:47.947 [2024-11-17 18:17:46.082711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:47.947 [2024-11-17 18:17:46.082816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71077 ] 00:08:47.947 { 00:08:47.947 "subsystems": [ 00:08:47.947 { 00:08:47.947 "subsystem": "bdev", 00:08:47.947 "config": [ 00:08:47.947 { 00:08:47.947 "params": { 00:08:47.947 "block_size": 4096, 00:08:47.947 "filename": "dd_sparse_aio_disk", 00:08:47.947 "name": "dd_aio" 00:08:47.947 }, 00:08:47.947 "method": "bdev_aio_create" 00:08:47.947 }, 00:08:47.947 { 00:08:47.947 "method": "bdev_wait_for_examine" 00:08:47.947 } 00:08:47.947 ] 00:08:47.947 } 00:08:47.947 ] 00:08:47.947 } 00:08:47.947 [2024-11-17 18:17:46.209744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.206 [2024-11-17 18:17:46.242791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.206  [2024-11-17T18:17:46.473Z] Copying: 12/36 [MB] (average 1333 MBps) 00:08:48.206 00:08:48.465 18:17:46 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:48.465 18:17:46 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:48.465 18:17:46 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:48.465 18:17:46 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:48.465 18:17:46 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:48.465 18:17:46 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:48.465 18:17:46 -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:48.465 18:17:46 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:48.465 18:17:46 -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:48.465 18:17:46 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:48.465 00:08:48.465 real 0m0.451s 00:08:48.465 user 0m0.248s 00:08:48.465 sys 0m0.130s 00:08:48.465 ************************************ 00:08:48.465 END TEST dd_sparse_bdev_to_file 00:08:48.465 ************************************ 00:08:48.465 18:17:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.465 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:48.465 18:17:46 -- 
dd/sparse.sh@1 -- # cleanup 00:08:48.465 18:17:46 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:48.465 18:17:46 -- dd/sparse.sh@12 -- # rm file_zero1 00:08:48.465 18:17:46 -- dd/sparse.sh@13 -- # rm file_zero2 00:08:48.465 18:17:46 -- dd/sparse.sh@14 -- # rm file_zero3 00:08:48.465 ************************************ 00:08:48.465 END TEST spdk_dd_sparse 00:08:48.465 ************************************ 00:08:48.465 00:08:48.465 real 0m1.808s 00:08:48.465 user 0m0.949s 00:08:48.465 sys 0m0.609s 00:08:48.465 18:17:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.465 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:48.465 18:17:46 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:48.465 18:17:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.465 18:17:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.465 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:48.465 ************************************ 00:08:48.465 START TEST spdk_dd_negative 00:08:48.465 ************************************ 00:08:48.465 18:17:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:48.465 * Looking for test storage... 00:08:48.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:48.465 18:17:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:48.465 18:17:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:48.465 18:17:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:48.725 18:17:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:48.725 18:17:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:48.725 18:17:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:48.725 18:17:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:48.725 18:17:46 -- scripts/common.sh@335 -- # IFS=.-: 00:08:48.725 18:17:46 -- scripts/common.sh@335 -- # read -ra ver1 00:08:48.725 18:17:46 -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.725 18:17:46 -- scripts/common.sh@336 -- # read -ra ver2 00:08:48.725 18:17:46 -- scripts/common.sh@337 -- # local 'op=<' 00:08:48.725 18:17:46 -- scripts/common.sh@339 -- # ver1_l=2 00:08:48.725 18:17:46 -- scripts/common.sh@340 -- # ver2_l=1 00:08:48.725 18:17:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:48.725 18:17:46 -- scripts/common.sh@343 -- # case "$op" in 00:08:48.725 18:17:46 -- scripts/common.sh@344 -- # : 1 00:08:48.725 18:17:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:48.725 18:17:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.725 18:17:46 -- scripts/common.sh@364 -- # decimal 1 00:08:48.725 18:17:46 -- scripts/common.sh@352 -- # local d=1 00:08:48.725 18:17:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.725 18:17:46 -- scripts/common.sh@354 -- # echo 1 00:08:48.725 18:17:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:48.725 18:17:46 -- scripts/common.sh@365 -- # decimal 2 00:08:48.725 18:17:46 -- scripts/common.sh@352 -- # local d=2 00:08:48.725 18:17:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.725 18:17:46 -- scripts/common.sh@354 -- # echo 2 00:08:48.725 18:17:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:48.725 18:17:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:48.725 18:17:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:48.725 18:17:46 -- scripts/common.sh@367 -- # return 0 00:08:48.725 18:17:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.725 18:17:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:48.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.725 --rc genhtml_branch_coverage=1 00:08:48.725 --rc genhtml_function_coverage=1 00:08:48.725 --rc genhtml_legend=1 00:08:48.725 --rc geninfo_all_blocks=1 00:08:48.725 --rc geninfo_unexecuted_blocks=1 00:08:48.725 00:08:48.725 ' 00:08:48.725 18:17:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:48.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.725 --rc genhtml_branch_coverage=1 00:08:48.725 --rc genhtml_function_coverage=1 00:08:48.725 --rc genhtml_legend=1 00:08:48.725 --rc geninfo_all_blocks=1 00:08:48.725 --rc geninfo_unexecuted_blocks=1 00:08:48.725 00:08:48.725 ' 00:08:48.725 18:17:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:48.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.725 --rc genhtml_branch_coverage=1 00:08:48.725 --rc genhtml_function_coverage=1 00:08:48.725 --rc genhtml_legend=1 00:08:48.725 --rc geninfo_all_blocks=1 00:08:48.725 --rc geninfo_unexecuted_blocks=1 00:08:48.725 00:08:48.725 ' 00:08:48.725 18:17:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:48.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.725 --rc genhtml_branch_coverage=1 00:08:48.725 --rc genhtml_function_coverage=1 00:08:48.725 --rc genhtml_legend=1 00:08:48.725 --rc geninfo_all_blocks=1 00:08:48.725 --rc geninfo_unexecuted_blocks=1 00:08:48.725 00:08:48.725 ' 00:08:48.725 18:17:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.725 18:17:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.725 18:17:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.725 18:17:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.725 18:17:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.725 18:17:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.725 18:17:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.725 18:17:46 -- paths/export.sh@5 -- # export PATH 00:08:48.725 18:17:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.725 18:17:46 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.725 18:17:46 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.725 18:17:46 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.725 18:17:46 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.725 18:17:46 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:48.725 18:17:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.725 18:17:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.725 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:48.725 ************************************ 00:08:48.725 START TEST dd_invalid_arguments 00:08:48.725 ************************************ 00:08:48.725 18:17:46 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:08:48.725 18:17:46 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:48.725 18:17:46 -- common/autotest_common.sh@650 -- # local es=0 00:08:48.725 18:17:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:48.726 18:17:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.726 18:17:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.726 18:17:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.726 18:17:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.726 18:17:46 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.726 18:17:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.726 18:17:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.726 18:17:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.726 18:17:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:48.726 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:48.726 options: 00:08:48.726 -c, --config JSON config file (default none) 00:08:48.726 --json JSON config file (default none) 00:08:48.726 --json-ignore-init-errors 00:08:48.726 don't exit on invalid config entry 00:08:48.726 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:48.726 -g, --single-file-segments 00:08:48.726 force creating just one hugetlbfs file 00:08:48.726 -h, --help show this usage 00:08:48.726 -i, --shm-id shared memory ID (optional) 00:08:48.726 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:48.726 --lcores lcore to CPU mapping list. The list is in the format: 00:08:48.726 [<,lcores[@CPUs]>...] 00:08:48.726 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:48.726 Within the group, '-' is used for range separator, 00:08:48.726 ',' is used for single number separator. 00:08:48.726 '( )' can be omitted for single element group, 00:08:48.726 '@' can be omitted if cpus and lcores have the same value 00:08:48.726 -n, --mem-channels channel number of memory channels used for DPDK 00:08:48.726 -p, --main-core main (primary) core for DPDK 00:08:48.726 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:48.726 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:48.726 --disable-cpumask-locks Disable CPU core lock files. 00:08:48.726 --silence-noticelog disable notice level logging to stderr 00:08:48.726 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:48.726 -u, --no-pci disable PCI access 00:08:48.726 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:48.726 --max-delay maximum reactor delay (in microseconds) 00:08:48.726 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:48.726 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:48.726 -R, --huge-unlink unlink huge files after initialization 00:08:48.726 -v, --version print SPDK version 00:08:48.726 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:48.726 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:48.726 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:48.726 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:48.726 Tracepoints vary in size and can use more than one trace entry. 
00:08:48.726 --rpcs-allowed comma-separated list of permitted RPCS 00:08:48.726 --env-context Opaque context for use of the env implementation 00:08:48.726 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:48.726 --no-huge run without using hugepages 00:08:48.726 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:48.726 -e, --tpoint-group [:] 00:08:48.726 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:08:48.726 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:48.726 Groups and masks /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:48.726 [2024-11-17 18:17:46.861730] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:08:48.726 can be combined (e.g. thread,bdev:0x1). 00:08:48.726 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:48.726 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:48.726 [--------- DD Options ---------] 00:08:48.726 --if Input file. Must specify either --if or --ib. 00:08:48.726 --ib Input bdev. Must specifier either --if or --ib 00:08:48.726 --of Output file. Must specify either --of or --ob. 00:08:48.726 --ob Output bdev. Must specify either --of or --ob. 00:08:48.726 --iflag Input file flags. 00:08:48.726 --oflag Output file flags. 00:08:48.726 --bs I/O unit size (default: 4096) 00:08:48.726 --qd Queue depth (default: 2) 00:08:48.726 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:48.726 --skip Skip this many I/O units at start of input. (default: 0) 00:08:48.726 --seek Skip this many I/O units at start of output. (default: 0) 00:08:48.726 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:08:48.726 --sparse Enable hole skipping in input target 00:08:48.726 Available iflag and oflag values: 00:08:48.726 append - append mode 00:08:48.726 direct - use direct I/O for data 00:08:48.726 directory - fail unless a directory 00:08:48.726 dsync - use synchronized I/O for data 00:08:48.726 noatime - do not update access time 00:08:48.726 noctty - do not assign controlling terminal from file 00:08:48.726 nofollow - do not follow symlinks 00:08:48.726 nonblock - use non-blocking I/O 00:08:48.726 sync - use synchronized I/O for data and metadata 00:08:48.726 18:17:46 -- common/autotest_common.sh@653 -- # es=2 00:08:48.726 18:17:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:48.726 18:17:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:48.726 18:17:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:48.726 00:08:48.726 real 0m0.066s 00:08:48.726 user 0m0.037s 00:08:48.726 sys 0m0.027s 00:08:48.726 ************************************ 00:08:48.726 END TEST dd_invalid_arguments 00:08:48.726 ************************************ 00:08:48.726 18:17:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.726 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:48.726 18:17:46 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:48.726 18:17:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.726 18:17:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.726 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:48.726 ************************************ 00:08:48.726 START TEST dd_double_input 00:08:48.726 ************************************ 00:08:48.726 18:17:46 -- common/autotest_common.sh@1114 -- # double_input 00:08:48.726 18:17:46 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:48.726 18:17:46 -- common/autotest_common.sh@650 -- # local es=0 00:08:48.726 18:17:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:48.726 18:17:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.726 18:17:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.726 18:17:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.726 18:17:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.726 18:17:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.726 18:17:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.726 18:17:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.726 18:17:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.726 18:17:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:48.726 [2024-11-17 18:17:46.980693] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:08:48.986 18:17:46 -- common/autotest_common.sh@653 -- # es=22 00:08:48.986 18:17:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:48.986 18:17:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:48.986 ************************************ 00:08:48.986 END TEST dd_double_input 00:08:48.986 ************************************ 00:08:48.986 18:17:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:48.986 00:08:48.986 real 0m0.062s 00:08:48.986 user 0m0.036s 00:08:48.986 sys 0m0.025s 00:08:48.986 18:17:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.986 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:48.986 18:17:47 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:48.986 18:17:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.986 18:17:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.986 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:48.986 ************************************ 00:08:48.986 START TEST dd_double_output 00:08:48.986 ************************************ 00:08:48.986 18:17:47 -- common/autotest_common.sh@1114 -- # double_output 00:08:48.986 18:17:47 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:48.986 18:17:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:48.986 18:17:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:48.986 18:17:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.986 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.986 18:17:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.986 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.986 18:17:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.986 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.986 18:17:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.986 18:17:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.986 18:17:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:48.986 [2024-11-17 18:17:47.098931] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:08:48.986 18:17:47 -- common/autotest_common.sh@653 -- # es=22 00:08:48.986 18:17:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:48.986 18:17:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:48.986 18:17:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:48.986 00:08:48.986 real 0m0.062s 00:08:48.986 user 0m0.034s 00:08:48.986 sys 0m0.028s 00:08:48.986 18:17:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.986 ************************************ 00:08:48.986 END TEST dd_double_output 00:08:48.986 ************************************ 00:08:48.986 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:48.986 18:17:47 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:48.986 18:17:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.986 18:17:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.986 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:48.986 ************************************ 00:08:48.986 START TEST dd_no_input 00:08:48.986 ************************************ 00:08:48.986 18:17:47 -- common/autotest_common.sh@1114 -- # no_input 00:08:48.986 18:17:47 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:48.986 18:17:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:48.986 18:17:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:48.986 18:17:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.986 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.986 18:17:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.986 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.986 18:17:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.986 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.986 18:17:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.986 18:17:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.986 18:17:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:48.986 [2024-11-17 18:17:47.215477] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:08:48.986 18:17:47 -- common/autotest_common.sh@653 -- # es=22 00:08:48.986 18:17:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:48.986 18:17:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:48.986 ************************************ 00:08:48.986 END TEST dd_no_input 00:08:48.986 ************************************ 00:08:48.986 18:17:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:48.986 00:08:48.986 real 0m0.064s 00:08:48.986 user 0m0.035s 00:08:48.986 sys 0m0.028s 00:08:48.986 18:17:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.986 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:49.245 18:17:47 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:49.245 18:17:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.245 18:17:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.245 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:49.245 ************************************ 
00:08:49.245 START TEST dd_no_output 00:08:49.245 ************************************ 00:08:49.245 18:17:47 -- common/autotest_common.sh@1114 -- # no_output 00:08:49.245 18:17:47 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:49.245 18:17:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:49.245 18:17:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:49.245 18:17:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.245 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.245 18:17:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.245 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.245 18:17:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.245 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.245 18:17:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.245 18:17:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:49.245 18:17:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:49.245 [2024-11-17 18:17:47.333828] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:08:49.245 18:17:47 -- common/autotest_common.sh@653 -- # es=22 00:08:49.245 18:17:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:49.245 ************************************ 00:08:49.245 END TEST dd_no_output 00:08:49.245 ************************************ 00:08:49.245 18:17:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:49.245 18:17:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:49.245 00:08:49.245 real 0m0.066s 00:08:49.245 user 0m0.045s 00:08:49.245 sys 0m0.021s 00:08:49.245 18:17:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.245 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:49.245 18:17:47 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:49.245 18:17:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.245 18:17:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.245 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:49.245 ************************************ 00:08:49.245 START TEST dd_wrong_blocksize 00:08:49.245 ************************************ 00:08:49.245 18:17:47 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:08:49.245 18:17:47 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:49.245 18:17:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:49.245 18:17:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:49.245 18:17:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.245 18:17:47 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:08:49.245 18:17:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.245 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.245 18:17:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.245 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.245 18:17:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.246 18:17:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:49.246 18:17:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:49.246 [2024-11-17 18:17:47.457078] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:08:49.246 18:17:47 -- common/autotest_common.sh@653 -- # es=22 00:08:49.246 18:17:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:49.246 18:17:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:49.246 18:17:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:49.246 00:08:49.246 real 0m0.066s 00:08:49.246 user 0m0.045s 00:08:49.246 sys 0m0.020s 00:08:49.246 18:17:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.246 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:49.246 ************************************ 00:08:49.246 END TEST dd_wrong_blocksize 00:08:49.246 ************************************ 00:08:49.505 18:17:47 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:49.505 18:17:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.505 18:17:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.505 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:49.505 ************************************ 00:08:49.505 START TEST dd_smaller_blocksize 00:08:49.505 ************************************ 00:08:49.505 18:17:47 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:08:49.505 18:17:47 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:49.505 18:17:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:49.505 18:17:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:49.505 18:17:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.505 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.505 18:17:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.505 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.505 18:17:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.505 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.505 18:17:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.505 18:17:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:08:49.505 18:17:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:49.505 [2024-11-17 18:17:47.570918] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:49.505 [2024-11-17 18:17:47.571148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71301 ] 00:08:49.505 [2024-11-17 18:17:47.709999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.505 [2024-11-17 18:17:47.749362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.764 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:49.764 [2024-11-17 18:17:47.799957] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:49.764 [2024-11-17 18:17:47.799998] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:49.764 [2024-11-17 18:17:47.864203] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:49.764 18:17:47 -- common/autotest_common.sh@653 -- # es=244 00:08:49.764 18:17:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:49.764 18:17:47 -- common/autotest_common.sh@662 -- # es=116 00:08:49.764 18:17:47 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:49.764 18:17:47 -- common/autotest_common.sh@670 -- # es=1 00:08:49.764 18:17:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:49.764 00:08:49.764 real 0m0.413s 00:08:49.764 user 0m0.204s 00:08:49.764 sys 0m0.104s 00:08:49.764 18:17:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.764 ************************************ 00:08:49.764 END TEST dd_smaller_blocksize 00:08:49.764 ************************************ 00:08:49.764 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:49.764 18:17:47 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:49.764 18:17:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.764 18:17:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.764 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:49.764 ************************************ 00:08:49.764 START TEST dd_invalid_count 00:08:49.764 ************************************ 00:08:49.764 18:17:47 -- common/autotest_common.sh@1114 -- # invalid_count 00:08:49.764 18:17:47 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:49.764 18:17:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:49.764 18:17:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:49.764 18:17:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.764 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.764 18:17:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.764 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.764 18:17:47 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.764 18:17:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.764 18:17:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.764 18:17:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:49.764 18:17:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:50.023 [2024-11-17 18:17:48.035647] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:08:50.023 18:17:48 -- common/autotest_common.sh@653 -- # es=22 00:08:50.023 18:17:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.023 ************************************ 00:08:50.023 END TEST dd_invalid_count 00:08:50.024 ************************************ 00:08:50.024 18:17:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:50.024 18:17:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.024 00:08:50.024 real 0m0.064s 00:08:50.024 user 0m0.040s 00:08:50.024 sys 0m0.024s 00:08:50.024 18:17:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.024 18:17:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.024 18:17:48 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:50.024 18:17:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.024 18:17:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.024 18:17:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.024 ************************************ 00:08:50.024 START TEST dd_invalid_oflag 00:08:50.024 ************************************ 00:08:50.024 18:17:48 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:08:50.024 18:17:48 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:50.024 18:17:48 -- common/autotest_common.sh@650 -- # local es=0 00:08:50.024 18:17:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:50.024 18:17:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.024 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.024 18:17:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.024 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.024 18:17:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.024 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.024 18:17:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.024 18:17:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.024 18:17:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:50.024 [2024-11-17 18:17:48.154950] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:08:50.024 18:17:48 -- common/autotest_common.sh@653 -- # es=22 00:08:50.024 18:17:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.024 18:17:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:50.024 
18:17:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.024 00:08:50.024 real 0m0.067s 00:08:50.024 user 0m0.045s 00:08:50.024 sys 0m0.022s 00:08:50.024 18:17:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.024 ************************************ 00:08:50.024 END TEST dd_invalid_oflag 00:08:50.024 ************************************ 00:08:50.024 18:17:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.024 18:17:48 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:50.024 18:17:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.024 18:17:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.024 18:17:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.024 ************************************ 00:08:50.024 START TEST dd_invalid_iflag 00:08:50.024 ************************************ 00:08:50.024 18:17:48 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:08:50.024 18:17:48 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:50.024 18:17:48 -- common/autotest_common.sh@650 -- # local es=0 00:08:50.024 18:17:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:50.024 18:17:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.024 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.024 18:17:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.024 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.024 18:17:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.024 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.024 18:17:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.024 18:17:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.024 18:17:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:50.024 [2024-11-17 18:17:48.270565] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:08:50.024 18:17:48 -- common/autotest_common.sh@653 -- # es=22 00:08:50.024 18:17:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.024 18:17:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:50.024 18:17:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.024 00:08:50.024 real 0m0.064s 00:08:50.024 user 0m0.040s 00:08:50.024 sys 0m0.024s 00:08:50.024 18:17:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.024 ************************************ 00:08:50.024 END TEST dd_invalid_iflag 00:08:50.024 ************************************ 00:08:50.024 18:17:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.300 18:17:48 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:50.300 18:17:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.300 18:17:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.300 18:17:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.300 ************************************ 00:08:50.300 START TEST dd_unknown_flag 00:08:50.300 ************************************ 00:08:50.300 18:17:48 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:08:50.300 18:17:48 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:50.300 18:17:48 -- common/autotest_common.sh@650 -- # local es=0 00:08:50.300 18:17:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:50.300 18:17:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.300 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.300 18:17:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.300 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.300 18:17:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.300 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.300 18:17:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.300 18:17:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.300 18:17:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:50.300 [2024-11-17 18:17:48.386706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:50.300 [2024-11-17 18:17:48.386802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71387 ] 00:08:50.300 [2024-11-17 18:17:48.525803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.584 [2024-11-17 18:17:48.567439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.584 [2024-11-17 18:17:48.618263] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:08:50.584 [2024-11-17 18:17:48.618362] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:50.584 [2024-11-17 18:17:48.618377] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:50.584 [2024-11-17 18:17:48.618390] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.584 [2024-11-17 18:17:48.681915] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:50.584 18:17:48 -- common/autotest_common.sh@653 -- # es=236 00:08:50.584 18:17:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.584 18:17:48 -- common/autotest_common.sh@662 -- # es=108 00:08:50.584 18:17:48 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:50.584 18:17:48 -- common/autotest_common.sh@670 -- # es=1 00:08:50.584 18:17:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.584 00:08:50.584 real 0m0.413s 00:08:50.584 user 0m0.213s 00:08:50.584 sys 0m0.095s 00:08:50.584 18:17:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.584 ************************************ 00:08:50.584 END TEST dd_unknown_flag 00:08:50.584 
************************************ 00:08:50.584 18:17:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 18:17:48 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:50.584 18:17:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.584 18:17:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.584 18:17:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 ************************************ 00:08:50.584 START TEST dd_invalid_json 00:08:50.584 ************************************ 00:08:50.584 18:17:48 -- common/autotest_common.sh@1114 -- # invalid_json 00:08:50.584 18:17:48 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:50.584 18:17:48 -- dd/negative_dd.sh@95 -- # : 00:08:50.584 18:17:48 -- common/autotest_common.sh@650 -- # local es=0 00:08:50.584 18:17:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:50.584 18:17:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.584 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.584 18:17:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.584 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.584 18:17:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.584 18:17:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.584 18:17:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.584 18:17:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.584 18:17:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:50.850 [2024-11-17 18:17:48.853463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:50.850 [2024-11-17 18:17:48.853573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71415 ] 00:08:50.850 [2024-11-17 18:17:48.989130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.850 [2024-11-17 18:17:49.029807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.850 [2024-11-17 18:17:49.029972] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:08:50.850 [2024-11-17 18:17:49.029999] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.850 [2024-11-17 18:17:49.030046] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:50.850 18:17:49 -- common/autotest_common.sh@653 -- # es=234 00:08:50.850 18:17:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.850 18:17:49 -- common/autotest_common.sh@662 -- # es=106 00:08:50.850 18:17:49 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:50.850 18:17:49 -- common/autotest_common.sh@670 -- # es=1 00:08:50.850 18:17:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.850 00:08:50.850 real 0m0.297s 00:08:50.850 user 0m0.131s 00:08:50.850 sys 0m0.065s 00:08:50.850 18:17:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.850 ************************************ 00:08:50.850 END TEST dd_invalid_json 00:08:50.850 ************************************ 00:08:50.850 18:17:49 -- common/autotest_common.sh@10 -- # set +x 00:08:51.108 00:08:51.108 real 0m2.534s 00:08:51.108 user 0m1.212s 00:08:51.108 sys 0m0.954s 00:08:51.108 18:17:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.108 ************************************ 00:08:51.108 END TEST spdk_dd_negative 00:08:51.108 ************************************ 00:08:51.108 18:17:49 -- common/autotest_common.sh@10 -- # set +x 00:08:51.108 00:08:51.108 real 1m1.403s 00:08:51.108 user 0m36.813s 00:08:51.108 sys 0m15.392s 00:08:51.108 18:17:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.108 ************************************ 00:08:51.108 END TEST spdk_dd 00:08:51.108 ************************************ 00:08:51.108 18:17:49 -- common/autotest_common.sh@10 -- # set +x 00:08:51.108 18:17:49 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:51.108 18:17:49 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:51.108 18:17:49 -- spdk/autotest.sh@255 -- # timing_exit lib 00:08:51.108 18:17:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.108 18:17:49 -- common/autotest_common.sh@10 -- # set +x 00:08:51.108 18:17:49 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:51.108 18:17:49 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:51.108 18:17:49 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:51.108 18:17:49 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:51.108 18:17:49 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:08:51.108 18:17:49 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:08:51.108 18:17:49 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:51.108 18:17:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:51.108 18:17:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.108 18:17:49 -- common/autotest_common.sh@10 -- # set +x 00:08:51.108 ************************************ 00:08:51.108 START TEST 
nvmf_tcp 00:08:51.108 ************************************ 00:08:51.108 18:17:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:51.108 * Looking for test storage... 00:08:51.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:51.108 18:17:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:51.108 18:17:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:51.108 18:17:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:51.367 18:17:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:51.367 18:17:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:51.367 18:17:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:51.367 18:17:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:51.367 18:17:49 -- scripts/common.sh@335 -- # IFS=.-: 00:08:51.367 18:17:49 -- scripts/common.sh@335 -- # read -ra ver1 00:08:51.367 18:17:49 -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.367 18:17:49 -- scripts/common.sh@336 -- # read -ra ver2 00:08:51.367 18:17:49 -- scripts/common.sh@337 -- # local 'op=<' 00:08:51.367 18:17:49 -- scripts/common.sh@339 -- # ver1_l=2 00:08:51.367 18:17:49 -- scripts/common.sh@340 -- # ver2_l=1 00:08:51.367 18:17:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:51.367 18:17:49 -- scripts/common.sh@343 -- # case "$op" in 00:08:51.368 18:17:49 -- scripts/common.sh@344 -- # : 1 00:08:51.368 18:17:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:51.368 18:17:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.368 18:17:49 -- scripts/common.sh@364 -- # decimal 1 00:08:51.368 18:17:49 -- scripts/common.sh@352 -- # local d=1 00:08:51.368 18:17:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.368 18:17:49 -- scripts/common.sh@354 -- # echo 1 00:08:51.368 18:17:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:51.368 18:17:49 -- scripts/common.sh@365 -- # decimal 2 00:08:51.368 18:17:49 -- scripts/common.sh@352 -- # local d=2 00:08:51.368 18:17:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.368 18:17:49 -- scripts/common.sh@354 -- # echo 2 00:08:51.368 18:17:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:51.368 18:17:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:51.368 18:17:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:51.368 18:17:49 -- scripts/common.sh@367 -- # return 0 00:08:51.368 18:17:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.368 18:17:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:51.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.368 --rc genhtml_branch_coverage=1 00:08:51.368 --rc genhtml_function_coverage=1 00:08:51.368 --rc genhtml_legend=1 00:08:51.368 --rc geninfo_all_blocks=1 00:08:51.368 --rc geninfo_unexecuted_blocks=1 00:08:51.368 00:08:51.368 ' 00:08:51.368 18:17:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:51.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.368 --rc genhtml_branch_coverage=1 00:08:51.368 --rc genhtml_function_coverage=1 00:08:51.368 --rc genhtml_legend=1 00:08:51.368 --rc geninfo_all_blocks=1 00:08:51.368 --rc geninfo_unexecuted_blocks=1 00:08:51.368 00:08:51.368 ' 00:08:51.368 18:17:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:51.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.368 --rc 
genhtml_branch_coverage=1 00:08:51.368 --rc genhtml_function_coverage=1 00:08:51.368 --rc genhtml_legend=1 00:08:51.368 --rc geninfo_all_blocks=1 00:08:51.368 --rc geninfo_unexecuted_blocks=1 00:08:51.368 00:08:51.368 ' 00:08:51.368 18:17:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:51.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.368 --rc genhtml_branch_coverage=1 00:08:51.368 --rc genhtml_function_coverage=1 00:08:51.368 --rc genhtml_legend=1 00:08:51.368 --rc geninfo_all_blocks=1 00:08:51.368 --rc geninfo_unexecuted_blocks=1 00:08:51.368 00:08:51.368 ' 00:08:51.368 18:17:49 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:51.368 18:17:49 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:51.368 18:17:49 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:51.368 18:17:49 -- nvmf/common.sh@7 -- # uname -s 00:08:51.368 18:17:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.368 18:17:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.368 18:17:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.368 18:17:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.368 18:17:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.368 18:17:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.368 18:17:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.368 18:17:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.368 18:17:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.368 18:17:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.368 18:17:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:08:51.368 18:17:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:08:51.368 18:17:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.368 18:17:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.368 18:17:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:51.368 18:17:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.368 18:17:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.368 18:17:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.368 18:17:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.368 18:17:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.368 18:17:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.368 18:17:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.368 18:17:49 -- paths/export.sh@5 -- # export PATH 00:08:51.368 18:17:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.368 18:17:49 -- nvmf/common.sh@46 -- # : 0 00:08:51.368 18:17:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:51.368 18:17:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:51.368 18:17:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:51.368 18:17:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.368 18:17:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.368 18:17:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:51.368 18:17:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:51.368 18:17:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:51.368 18:17:49 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:51.368 18:17:49 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:51.368 18:17:49 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:51.368 18:17:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:51.368 18:17:49 -- common/autotest_common.sh@10 -- # set +x 00:08:51.368 18:17:49 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:51.368 18:17:49 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:51.368 18:17:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:51.368 18:17:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.368 18:17:49 -- common/autotest_common.sh@10 -- # set +x 00:08:51.368 ************************************ 00:08:51.368 START TEST nvmf_host_management 00:08:51.368 ************************************ 00:08:51.368 18:17:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:51.368 * Looking for test storage... 
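The lcov version probe that opens each test script above (lcov --version piped through awk '{print $NF}', then lt 1.15 2 via cmp_versions) decides whether the old-style --rc lcov_branch_coverage / --rc lcov_function_coverage options are appended; the same check repeats at the start of host_management.sh below. A minimal standalone sketch of that comparison, using a hypothetical helper name rather than the harness's own functions:

    version_lt() {  # true (0) when $1 sorts before $2; fields split on dots, missing fields count as 0
        local IFS='.-' a b i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    # lcov 1.x sorts before 2, so the trace ends up appending the --rc style coverage flags:
    version_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'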
00:08:51.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:51.368 18:17:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:51.368 18:17:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:51.368 18:17:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:51.628 18:17:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:51.628 18:17:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:51.628 18:17:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:51.628 18:17:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:51.628 18:17:49 -- scripts/common.sh@335 -- # IFS=.-: 00:08:51.628 18:17:49 -- scripts/common.sh@335 -- # read -ra ver1 00:08:51.628 18:17:49 -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.628 18:17:49 -- scripts/common.sh@336 -- # read -ra ver2 00:08:51.628 18:17:49 -- scripts/common.sh@337 -- # local 'op=<' 00:08:51.628 18:17:49 -- scripts/common.sh@339 -- # ver1_l=2 00:08:51.628 18:17:49 -- scripts/common.sh@340 -- # ver2_l=1 00:08:51.628 18:17:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:51.628 18:17:49 -- scripts/common.sh@343 -- # case "$op" in 00:08:51.628 18:17:49 -- scripts/common.sh@344 -- # : 1 00:08:51.628 18:17:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:51.628 18:17:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.628 18:17:49 -- scripts/common.sh@364 -- # decimal 1 00:08:51.628 18:17:49 -- scripts/common.sh@352 -- # local d=1 00:08:51.628 18:17:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.628 18:17:49 -- scripts/common.sh@354 -- # echo 1 00:08:51.628 18:17:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:51.628 18:17:49 -- scripts/common.sh@365 -- # decimal 2 00:08:51.628 18:17:49 -- scripts/common.sh@352 -- # local d=2 00:08:51.628 18:17:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.628 18:17:49 -- scripts/common.sh@354 -- # echo 2 00:08:51.628 18:17:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:51.628 18:17:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:51.628 18:17:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:51.628 18:17:49 -- scripts/common.sh@367 -- # return 0 00:08:51.628 18:17:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.628 18:17:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:51.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.628 --rc genhtml_branch_coverage=1 00:08:51.628 --rc genhtml_function_coverage=1 00:08:51.628 --rc genhtml_legend=1 00:08:51.628 --rc geninfo_all_blocks=1 00:08:51.628 --rc geninfo_unexecuted_blocks=1 00:08:51.628 00:08:51.628 ' 00:08:51.628 18:17:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:51.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.628 --rc genhtml_branch_coverage=1 00:08:51.628 --rc genhtml_function_coverage=1 00:08:51.628 --rc genhtml_legend=1 00:08:51.628 --rc geninfo_all_blocks=1 00:08:51.628 --rc geninfo_unexecuted_blocks=1 00:08:51.628 00:08:51.628 ' 00:08:51.628 18:17:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:51.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.628 --rc genhtml_branch_coverage=1 00:08:51.628 --rc genhtml_function_coverage=1 00:08:51.628 --rc genhtml_legend=1 00:08:51.628 --rc geninfo_all_blocks=1 00:08:51.628 --rc geninfo_unexecuted_blocks=1 00:08:51.628 00:08:51.628 ' 00:08:51.629 
18:17:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:51.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.629 --rc genhtml_branch_coverage=1 00:08:51.629 --rc genhtml_function_coverage=1 00:08:51.629 --rc genhtml_legend=1 00:08:51.629 --rc geninfo_all_blocks=1 00:08:51.629 --rc geninfo_unexecuted_blocks=1 00:08:51.629 00:08:51.629 ' 00:08:51.629 18:17:49 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:51.629 18:17:49 -- nvmf/common.sh@7 -- # uname -s 00:08:51.629 18:17:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.629 18:17:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.629 18:17:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.629 18:17:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.629 18:17:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.629 18:17:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.629 18:17:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.629 18:17:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.629 18:17:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.629 18:17:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.629 18:17:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:08:51.629 18:17:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:08:51.629 18:17:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.629 18:17:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.629 18:17:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:51.629 18:17:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.629 18:17:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.629 18:17:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.629 18:17:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.629 18:17:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.629 18:17:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.629 18:17:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.629 18:17:49 -- paths/export.sh@5 -- # export PATH 00:08:51.629 18:17:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.629 18:17:49 -- nvmf/common.sh@46 -- # : 0 00:08:51.629 18:17:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:51.629 18:17:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:51.629 18:17:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:51.629 18:17:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.629 18:17:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.629 18:17:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:51.629 18:17:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:51.629 18:17:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:51.629 18:17:49 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.629 18:17:49 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.629 18:17:49 -- target/host_management.sh@104 -- # nvmftestinit 00:08:51.629 18:17:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:51.629 18:17:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.629 18:17:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:51.629 18:17:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:51.629 18:17:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:51.629 18:17:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.629 18:17:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.629 18:17:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.629 18:17:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:51.629 18:17:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:51.629 18:17:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:51.629 18:17:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:51.629 18:17:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:51.629 18:17:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:51.629 18:17:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.629 18:17:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.629 18:17:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:51.629 18:17:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:51.629 18:17:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:51.629 18:17:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:51.629 18:17:49 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:51.629 18:17:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.629 18:17:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:51.629 18:17:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:51.629 18:17:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:51.629 18:17:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:51.629 18:17:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:51.629 Cannot find device "nvmf_init_br" 00:08:51.629 18:17:49 -- nvmf/common.sh@153 -- # true 00:08:51.629 18:17:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:51.629 Cannot find device "nvmf_tgt_br" 00:08:51.629 18:17:49 -- nvmf/common.sh@154 -- # true 00:08:51.629 18:17:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.629 Cannot find device "nvmf_tgt_br2" 00:08:51.629 18:17:49 -- nvmf/common.sh@155 -- # true 00:08:51.629 18:17:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:51.629 Cannot find device "nvmf_init_br" 00:08:51.629 18:17:49 -- nvmf/common.sh@156 -- # true 00:08:51.629 18:17:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:51.629 Cannot find device "nvmf_tgt_br" 00:08:51.629 18:17:49 -- nvmf/common.sh@157 -- # true 00:08:51.629 18:17:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:51.629 Cannot find device "nvmf_tgt_br2" 00:08:51.629 18:17:49 -- nvmf/common.sh@158 -- # true 00:08:51.629 18:17:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:51.629 Cannot find device "nvmf_br" 00:08:51.629 18:17:49 -- nvmf/common.sh@159 -- # true 00:08:51.629 18:17:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:51.629 Cannot find device "nvmf_init_if" 00:08:51.629 18:17:49 -- nvmf/common.sh@160 -- # true 00:08:51.629 18:17:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.629 18:17:49 -- nvmf/common.sh@161 -- # true 00:08:51.629 18:17:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.629 18:17:49 -- nvmf/common.sh@162 -- # true 00:08:51.629 18:17:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:51.629 18:17:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:51.629 18:17:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:51.629 18:17:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:51.629 18:17:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:51.629 18:17:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:51.629 18:17:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:51.629 18:17:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:51.888 18:17:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:51.888 18:17:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:51.888 18:17:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:51.888 18:17:49 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:51.888 18:17:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:51.888 18:17:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:51.888 18:17:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:51.888 18:17:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:51.888 18:17:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:51.888 18:17:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:51.888 18:17:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:51.888 18:17:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:51.888 18:17:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:51.888 18:17:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:51.888 18:17:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:51.888 18:17:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:51.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:08:51.889 00:08:51.889 --- 10.0.0.2 ping statistics --- 00:08:51.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.889 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:08:51.889 18:17:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:51.889 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:51.889 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:08:51.889 00:08:51.889 --- 10.0.0.3 ping statistics --- 00:08:51.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.889 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:51.889 18:17:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:51.889 00:08:51.889 --- 10.0.0.1 ping statistics --- 00:08:51.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.889 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:51.889 18:17:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.889 18:17:50 -- nvmf/common.sh@421 -- # return 0 00:08:51.889 18:17:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:51.889 18:17:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.889 18:17:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:51.889 18:17:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:51.889 18:17:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.889 18:17:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:51.889 18:17:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:51.889 18:17:50 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:08:51.889 18:17:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.889 18:17:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.889 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:51.889 ************************************ 00:08:51.889 START TEST nvmf_host_management 00:08:51.889 ************************************ 00:08:51.889 18:17:50 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:08:52.148 18:17:50 -- target/host_management.sh@69 -- # starttarget 00:08:52.148 18:17:50 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:52.148 18:17:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:52.148 18:17:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:52.148 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:52.148 18:17:50 -- nvmf/common.sh@469 -- # nvmfpid=71687 00:08:52.148 18:17:50 -- nvmf/common.sh@470 -- # waitforlisten 71687 00:08:52.148 18:17:50 -- common/autotest_common.sh@829 -- # '[' -z 71687 ']' 00:08:52.148 18:17:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.148 18:17:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:52.148 18:17:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.148 18:17:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.148 18:17:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.148 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:52.148 [2024-11-17 18:17:50.206437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:52.148 [2024-11-17 18:17:50.206514] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.148 [2024-11-17 18:17:50.343251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.148 [2024-11-17 18:17:50.385599] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:52.148 [2024-11-17 18:17:50.385789] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
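Condensed from the nvmf_veth_init trace above: the test network is one Linux bridge joining the host-side peers of three veth pairs; nvmf_init_if stays in the root namespace as the initiator (10.0.0.1/24), while nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into nvmf_tgt_ns_spdk for the target. The sketch below only restates the commands already traced (the link-up steps and the FORWARD rule are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # the three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace)
    # verify the bridged path before nvmf_tgt is started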
00:08:52.148 [2024-11-17 18:17:50.385805] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.148 [2024-11-17 18:17:50.385816] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.148 [2024-11-17 18:17:50.385994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.148 [2024-11-17 18:17:50.386653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.148 [2024-11-17 18:17:50.386869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.148 [2024-11-17 18:17:50.386877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.407 18:17:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.407 18:17:50 -- common/autotest_common.sh@862 -- # return 0 00:08:52.407 18:17:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:52.407 18:17:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.407 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:52.408 18:17:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.408 18:17:50 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.408 18:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.408 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:52.408 [2024-11-17 18:17:50.506250] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.408 18:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.408 18:17:50 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:52.408 18:17:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:52.408 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:52.408 18:17:50 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:52.408 18:17:50 -- target/host_management.sh@23 -- # cat 00:08:52.408 18:17:50 -- target/host_management.sh@30 -- # rpc_cmd 00:08:52.408 18:17:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.408 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:52.408 Malloc0 00:08:52.408 [2024-11-17 18:17:50.575878] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.408 18:17:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.408 18:17:50 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:52.408 18:17:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.408 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:52.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
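The cat at target/host_management.sh@23 pipes a prepared rpcs.txt into rpc_cmd (the harness wrapper around scripts/rpc.py on /var/tmp/spdk.sock, run inside nvmf_tgt_ns_spdk); the file's contents are not echoed in this trace. The sequence below is therefore only a plausible reconstruction, consistent with the Malloc0 bdev, the nqn.2016-06.io.spdk:cnode0 / host0 names and the 10.0.0.2:4420 TCP listener visible in the log, and with MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 from above; the exact flags are assumptions:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # this one is shown verbatim at host_management.sh@18
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0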
00:08:52.408 18:17:50 -- target/host_management.sh@73 -- # perfpid=71739 00:08:52.408 18:17:50 -- target/host_management.sh@74 -- # waitforlisten 71739 /var/tmp/bdevperf.sock 00:08:52.408 18:17:50 -- common/autotest_common.sh@829 -- # '[' -z 71739 ']' 00:08:52.408 18:17:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:52.408 18:17:50 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:52.408 18:17:50 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:52.408 18:17:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.408 18:17:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:52.408 18:17:50 -- nvmf/common.sh@520 -- # config=() 00:08:52.408 18:17:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.408 18:17:50 -- nvmf/common.sh@520 -- # local subsystem config 00:08:52.408 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:52.408 18:17:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:52.408 18:17:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:52.408 { 00:08:52.408 "params": { 00:08:52.408 "name": "Nvme$subsystem", 00:08:52.408 "trtype": "$TEST_TRANSPORT", 00:08:52.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.408 "adrfam": "ipv4", 00:08:52.408 "trsvcid": "$NVMF_PORT", 00:08:52.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.408 "hdgst": ${hdgst:-false}, 00:08:52.408 "ddgst": ${ddgst:-false} 00:08:52.408 }, 00:08:52.408 "method": "bdev_nvme_attach_controller" 00:08:52.408 } 00:08:52.408 EOF 00:08:52.408 )") 00:08:52.408 18:17:50 -- nvmf/common.sh@542 -- # cat 00:08:52.408 18:17:50 -- nvmf/common.sh@544 -- # jq . 00:08:52.408 18:17:50 -- nvmf/common.sh@545 -- # IFS=, 00:08:52.408 18:17:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:52.408 "params": { 00:08:52.408 "name": "Nvme0", 00:08:52.408 "trtype": "tcp", 00:08:52.408 "traddr": "10.0.0.2", 00:08:52.408 "adrfam": "ipv4", 00:08:52.408 "trsvcid": "4420", 00:08:52.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:52.408 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:52.408 "hdgst": false, 00:08:52.408 "ddgst": false 00:08:52.408 }, 00:08:52.408 "method": "bdev_nvme_attach_controller" 00:08:52.408 }' 00:08:52.668 [2024-11-17 18:17:50.675298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:52.668 [2024-11-17 18:17:50.675381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71739 ] 00:08:52.668 [2024-11-17 18:17:50.805447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.668 [2024-11-17 18:17:50.837614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.927 Running I/O for 10 seconds... 
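gen_nvmf_target_json assembles the bdev_nvme_attach_controller params printed above into a JSON config that bdevperf reads through process substitution (/dev/fd/63). Only the inner params object appears verbatim in the trace; the outer subsystems envelope below is inferred from the standard SPDK JSON-config layout, and the file path is hypothetical. Written out to an ordinary file, an equivalent invocation would be roughly:

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10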
00:08:53.497 18:17:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.497 18:17:51 -- common/autotest_common.sh@862 -- # return 0 00:08:53.497 18:17:51 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:53.497 18:17:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.497 18:17:51 -- common/autotest_common.sh@10 -- # set +x 00:08:53.497 18:17:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.497 18:17:51 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.497 18:17:51 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:53.497 18:17:51 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:53.497 18:17:51 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:53.497 18:17:51 -- target/host_management.sh@52 -- # local ret=1 00:08:53.497 18:17:51 -- target/host_management.sh@53 -- # local i 00:08:53.497 18:17:51 -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:53.497 18:17:51 -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:53.497 18:17:51 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:53.497 18:17:51 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:53.497 18:17:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.497 18:17:51 -- common/autotest_common.sh@10 -- # set +x 00:08:53.497 18:17:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.497 18:17:51 -- target/host_management.sh@55 -- # read_io_count=2175 00:08:53.497 18:17:51 -- target/host_management.sh@58 -- # '[' 2175 -ge 100 ']' 00:08:53.497 18:17:51 -- target/host_management.sh@59 -- # ret=0 00:08:53.497 18:17:51 -- target/host_management.sh@60 -- # break 00:08:53.497 18:17:51 -- target/host_management.sh@64 -- # return 0 00:08:53.497 18:17:51 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:53.497 18:17:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.497 18:17:51 -- common/autotest_common.sh@10 -- # set +x 00:08:53.497 [2024-11-17 18:17:51.752958] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753004] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753034] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753049] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753057] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to 
be set 00:08:53.497 [2024-11-17 18:17:51.753066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753081] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753097] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753105] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753113] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753121] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753137] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753153] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753161] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753168] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.497 [2024-11-17 18:17:51.753176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753208] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753314] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753332] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753353] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753361] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753369] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753386] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753394] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9b4f0 is same with the state(5) to be set 00:08:53.498 [2024-11-17 18:17:51.753465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.753985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.753994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
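The flood of ABORTED - SQ DELETION completions here is the expected outcome of the step traced just above: once bdevperf has pushed enough I/O (the waitforio loop at host_management.sh@54-@64 read num_read_ops=2175, well past the 100 threshold), the test removes the host from the subsystem, the target tears down the queue pair, and every command still in flight completes with an abort status. Condensed, the loop and its trigger look roughly like this; the pause between polls is assumed, since the trace only shows the counter checks:

    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 1        # assumed; not visible in the trace
    done
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0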
00:08:53.498 [2024-11-17 18:17:51.754004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-11-17 18:17:51.754013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.498 [2024-11-17 18:17:51.754024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 
[2024-11-17 18:17:51.754239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 
18:17:51.754459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754691] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.499 [2024-11-17 18:17:51.754804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.499 [2024-11-17 18:17:51.754813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.500 [2024-11-17 18:17:51.754823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.500 [2024-11-17 18:17:51.754832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.500 [2024-11-17 18:17:51.754842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.500 [2024-11-17 18:17:51.754864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.500 [2024-11-17 18:17:51.754878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.500 [2024-11-17 18:17:51.754887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.500 [2024-11-17 18:17:51.754950] 
bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19a0120 was disconnected and freed. reset controller. 00:08:53.500 [2024-11-17 18:17:51.756108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:53.824 task offset: 34560 on job bdev=Nvme0n1 fails 00:08:53.824 00:08:53.824 Latency(us) 00:08:53.824 [2024-11-17T18:17:52.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.824 [2024-11-17T18:17:52.091Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:53.824 [2024-11-17T18:17:52.091Z] Job: Nvme0n1 ended in about 0.79 seconds with error 00:08:53.824 Verification LBA range: start 0x0 length 0x400 00:08:53.824 Nvme0n1 : 0.79 2928.99 183.06 81.29 0.00 20968.15 6732.33 24546.21 00:08:53.824 [2024-11-17T18:17:52.091Z] =================================================================================================================== 00:08:53.824 [2024-11-17T18:17:52.091Z] Total : 2928.99 183.06 81.29 0.00 20968.15 6732.33 24546.21 00:08:53.824 18:17:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.824 18:17:51 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:53.824 18:17:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.824 [2024-11-17 18:17:51.758149] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:53.824 [2024-11-17 18:17:51.758172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a26a0 (9): Bad file descriptor 00:08:53.824 18:17:51 -- common/autotest_common.sh@10 -- # set +x 00:08:53.824 18:17:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.824 18:17:51 -- target/host_management.sh@87 -- # sleep 1 00:08:53.824 [2024-11-17 18:17:51.768488] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
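The rpc_cmd wrapper traced just above issues a plain nvmf_subsystem_add_host call against the running target. Run by hand it would look roughly like the following sketch; the RPC socket path is assumed to be the default, and the inverse call is shown only for symmetry:
# Whitelist one host NQN on the subsystem (sketch; default RPC socket assumed).
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# The inverse operation takes the same arguments:
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0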
00:08:54.759 18:17:52 -- target/host_management.sh@91 -- # kill -9 71739 00:08:54.759 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71739) - No such process 00:08:54.759 18:17:52 -- target/host_management.sh@91 -- # true 00:08:54.759 18:17:52 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:54.759 18:17:52 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:54.759 18:17:52 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:54.759 18:17:52 -- nvmf/common.sh@520 -- # config=() 00:08:54.759 18:17:52 -- nvmf/common.sh@520 -- # local subsystem config 00:08:54.759 18:17:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:54.759 18:17:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:54.759 { 00:08:54.759 "params": { 00:08:54.759 "name": "Nvme$subsystem", 00:08:54.759 "trtype": "$TEST_TRANSPORT", 00:08:54.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.759 "adrfam": "ipv4", 00:08:54.759 "trsvcid": "$NVMF_PORT", 00:08:54.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.759 "hdgst": ${hdgst:-false}, 00:08:54.759 "ddgst": ${ddgst:-false} 00:08:54.759 }, 00:08:54.759 "method": "bdev_nvme_attach_controller" 00:08:54.759 } 00:08:54.759 EOF 00:08:54.759 )") 00:08:54.759 18:17:52 -- nvmf/common.sh@542 -- # cat 00:08:54.759 18:17:52 -- nvmf/common.sh@544 -- # jq . 00:08:54.759 18:17:52 -- nvmf/common.sh@545 -- # IFS=, 00:08:54.759 18:17:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:54.759 "params": { 00:08:54.759 "name": "Nvme0", 00:08:54.759 "trtype": "tcp", 00:08:54.759 "traddr": "10.0.0.2", 00:08:54.759 "adrfam": "ipv4", 00:08:54.759 "trsvcid": "4420", 00:08:54.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:54.759 "hdgst": false, 00:08:54.759 "ddgst": false 00:08:54.759 }, 00:08:54.759 "method": "bdev_nvme_attach_controller" 00:08:54.759 }' 00:08:54.759 [2024-11-17 18:17:52.826311] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:54.759 [2024-11-17 18:17:52.826423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71777 ] 00:08:54.759 [2024-11-17 18:17:52.966429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.759 [2024-11-17 18:17:53.009841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.018 Running I/O for 1 seconds... 
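The /dev/fd/62 passed to bdevperf above is a bash process substitution carrying the JSON that gen_nvmf_target_json renders, and that JSON amounts to one bdev_nvme_attach_controller call. A rough RPC equivalent of the same attachment, with flag spellings assumed to match the rpc.py in this tree:
# Attach the remote namespace as bdev Nvme0n1 on a running SPDK app (sketch).
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0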
00:08:55.951 00:08:55.951 Latency(us) 00:08:55.951 [2024-11-17T18:17:54.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.951 [2024-11-17T18:17:54.218Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:55.951 Verification LBA range: start 0x0 length 0x400 00:08:55.951 Nvme0n1 : 1.02 2910.67 181.92 0.00 0.00 21625.55 1072.41 32172.22 00:08:55.951 [2024-11-17T18:17:54.218Z] =================================================================================================================== 00:08:55.951 [2024-11-17T18:17:54.218Z] Total : 2910.67 181.92 0.00 0.00 21625.55 1072.41 32172.22 00:08:56.210 18:17:54 -- target/host_management.sh@101 -- # stoptarget 00:08:56.210 18:17:54 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:56.210 18:17:54 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:56.210 18:17:54 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:56.210 18:17:54 -- target/host_management.sh@40 -- # nvmftestfini 00:08:56.210 18:17:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:56.210 18:17:54 -- nvmf/common.sh@116 -- # sync 00:08:56.210 18:17:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:56.210 18:17:54 -- nvmf/common.sh@119 -- # set +e 00:08:56.210 18:17:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:56.210 18:17:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:56.210 rmmod nvme_tcp 00:08:56.210 rmmod nvme_fabrics 00:08:56.210 rmmod nvme_keyring 00:08:56.210 18:17:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:56.210 18:17:54 -- nvmf/common.sh@123 -- # set -e 00:08:56.210 18:17:54 -- nvmf/common.sh@124 -- # return 0 00:08:56.210 18:17:54 -- nvmf/common.sh@477 -- # '[' -n 71687 ']' 00:08:56.210 18:17:54 -- nvmf/common.sh@478 -- # killprocess 71687 00:08:56.210 18:17:54 -- common/autotest_common.sh@936 -- # '[' -z 71687 ']' 00:08:56.210 18:17:54 -- common/autotest_common.sh@940 -- # kill -0 71687 00:08:56.210 18:17:54 -- common/autotest_common.sh@941 -- # uname 00:08:56.210 18:17:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:56.210 18:17:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71687 00:08:56.469 18:17:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:56.469 18:17:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:56.469 killing process with pid 71687 00:08:56.469 18:17:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71687' 00:08:56.469 18:17:54 -- common/autotest_common.sh@955 -- # kill 71687 00:08:56.469 18:17:54 -- common/autotest_common.sh@960 -- # wait 71687 00:08:56.469 [2024-11-17 18:17:54.610156] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:56.469 18:17:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:56.469 18:17:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:56.469 18:17:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:56.469 18:17:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.469 18:17:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:56.469 18:17:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.469 18:17:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.469 18:17:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.469 18:17:54 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:56.469 00:08:56.469 real 0m4.521s 00:08:56.469 user 0m19.250s 00:08:56.469 sys 0m1.163s 00:08:56.469 18:17:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.469 18:17:54 -- common/autotest_common.sh@10 -- # set +x 00:08:56.469 ************************************ 00:08:56.469 END TEST nvmf_host_management 00:08:56.469 ************************************ 00:08:56.469 18:17:54 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:08:56.469 00:08:56.469 real 0m5.227s 00:08:56.469 user 0m19.454s 00:08:56.469 sys 0m1.433s 00:08:56.469 18:17:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.469 ************************************ 00:08:56.469 END TEST nvmf_host_management 00:08:56.469 ************************************ 00:08:56.469 18:17:54 -- common/autotest_common.sh@10 -- # set +x 00:08:56.727 18:17:54 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:56.727 18:17:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:56.727 18:17:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.727 18:17:54 -- common/autotest_common.sh@10 -- # set +x 00:08:56.727 ************************************ 00:08:56.727 START TEST nvmf_lvol 00:08:56.727 ************************************ 00:08:56.727 18:17:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:56.727 * Looking for test storage... 00:08:56.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:56.727 18:17:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:56.727 18:17:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:56.727 18:17:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:56.727 18:17:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:56.727 18:17:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:56.727 18:17:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:56.727 18:17:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:56.727 18:17:54 -- scripts/common.sh@335 -- # IFS=.-: 00:08:56.727 18:17:54 -- scripts/common.sh@335 -- # read -ra ver1 00:08:56.727 18:17:54 -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.727 18:17:54 -- scripts/common.sh@336 -- # read -ra ver2 00:08:56.727 18:17:54 -- scripts/common.sh@337 -- # local 'op=<' 00:08:56.727 18:17:54 -- scripts/common.sh@339 -- # ver1_l=2 00:08:56.727 18:17:54 -- scripts/common.sh@340 -- # ver2_l=1 00:08:56.727 18:17:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:56.727 18:17:54 -- scripts/common.sh@343 -- # case "$op" in 00:08:56.727 18:17:54 -- scripts/common.sh@344 -- # : 1 00:08:56.727 18:17:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:56.727 18:17:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.727 18:17:54 -- scripts/common.sh@364 -- # decimal 1 00:08:56.727 18:17:54 -- scripts/common.sh@352 -- # local d=1 00:08:56.727 18:17:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.727 18:17:54 -- scripts/common.sh@354 -- # echo 1 00:08:56.727 18:17:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:56.727 18:17:54 -- scripts/common.sh@365 -- # decimal 2 00:08:56.727 18:17:54 -- scripts/common.sh@352 -- # local d=2 00:08:56.727 18:17:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.727 18:17:54 -- scripts/common.sh@354 -- # echo 2 00:08:56.727 18:17:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:56.727 18:17:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:56.727 18:17:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:56.727 18:17:54 -- scripts/common.sh@367 -- # return 0 00:08:56.727 18:17:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.727 18:17:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:56.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.727 --rc genhtml_branch_coverage=1 00:08:56.727 --rc genhtml_function_coverage=1 00:08:56.727 --rc genhtml_legend=1 00:08:56.727 --rc geninfo_all_blocks=1 00:08:56.727 --rc geninfo_unexecuted_blocks=1 00:08:56.727 00:08:56.727 ' 00:08:56.727 18:17:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:56.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.727 --rc genhtml_branch_coverage=1 00:08:56.727 --rc genhtml_function_coverage=1 00:08:56.727 --rc genhtml_legend=1 00:08:56.727 --rc geninfo_all_blocks=1 00:08:56.727 --rc geninfo_unexecuted_blocks=1 00:08:56.727 00:08:56.727 ' 00:08:56.727 18:17:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:56.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.727 --rc genhtml_branch_coverage=1 00:08:56.728 --rc genhtml_function_coverage=1 00:08:56.728 --rc genhtml_legend=1 00:08:56.728 --rc geninfo_all_blocks=1 00:08:56.728 --rc geninfo_unexecuted_blocks=1 00:08:56.728 00:08:56.728 ' 00:08:56.728 18:17:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:56.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.728 --rc genhtml_branch_coverage=1 00:08:56.728 --rc genhtml_function_coverage=1 00:08:56.728 --rc genhtml_legend=1 00:08:56.728 --rc geninfo_all_blocks=1 00:08:56.728 --rc geninfo_unexecuted_blocks=1 00:08:56.728 00:08:56.728 ' 00:08:56.728 18:17:54 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:56.728 18:17:54 -- nvmf/common.sh@7 -- # uname -s 00:08:56.728 18:17:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.728 18:17:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.728 18:17:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.728 18:17:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.728 18:17:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.728 18:17:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.728 18:17:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.728 18:17:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.728 18:17:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.728 18:17:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.728 18:17:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:08:56.728 
18:17:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:08:56.728 18:17:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.728 18:17:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.728 18:17:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:56.728 18:17:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:56.728 18:17:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.728 18:17:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.728 18:17:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.728 18:17:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.728 18:17:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.728 18:17:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.728 18:17:54 -- paths/export.sh@5 -- # export PATH 00:08:56.728 18:17:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.728 18:17:54 -- nvmf/common.sh@46 -- # : 0 00:08:56.728 18:17:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:56.728 18:17:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:56.728 18:17:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:56.728 18:17:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.728 18:17:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.728 18:17:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
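The NVME_HOSTNQN/NVME_HOSTID pair generated here is what the NVME_HOST array hands to nvme-cli when a test later connects from the initiator side; this section itself never issues the connect, so the following is only a sketch, with the target address and subsystem NQN assumed from the surrounding tests:
# Initiator-side connect using the generated host identity (sketch).
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 \
    --hostid=f1ec9f72-7473-4a4e-a03d-121531763870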
00:08:56.728 18:17:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:56.728 18:17:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:56.728 18:17:54 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:56.728 18:17:54 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:56.728 18:17:54 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:56.728 18:17:54 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:56.728 18:17:54 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.728 18:17:54 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:56.728 18:17:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:56.728 18:17:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.728 18:17:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:56.728 18:17:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:56.728 18:17:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:56.728 18:17:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.728 18:17:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.728 18:17:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.728 18:17:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:56.728 18:17:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:56.728 18:17:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:56.728 18:17:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:56.728 18:17:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:56.728 18:17:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:56.728 18:17:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.728 18:17:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.728 18:17:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:56.728 18:17:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:56.728 18:17:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:56.728 18:17:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:56.728 18:17:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:56.728 18:17:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.728 18:17:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:56.728 18:17:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:56.728 18:17:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:56.728 18:17:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:56.728 18:17:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:56.987 18:17:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:56.987 Cannot find device "nvmf_tgt_br" 00:08:56.987 18:17:55 -- nvmf/common.sh@154 -- # true 00:08:56.987 18:17:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:56.987 Cannot find device "nvmf_tgt_br2" 00:08:56.987 18:17:55 -- nvmf/common.sh@155 -- # true 00:08:56.987 18:17:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:56.987 18:17:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:56.987 Cannot find device "nvmf_tgt_br" 00:08:56.987 18:17:55 -- nvmf/common.sh@157 -- # true 00:08:56.987 18:17:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:56.987 Cannot find device "nvmf_tgt_br2" 00:08:56.987 18:17:55 -- nvmf/common.sh@158 -- # true 00:08:56.987 18:17:55 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:08:56.987 18:17:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:56.987 18:17:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:56.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.987 18:17:55 -- nvmf/common.sh@161 -- # true 00:08:56.987 18:17:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:56.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.987 18:17:55 -- nvmf/common.sh@162 -- # true 00:08:56.987 18:17:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:56.987 18:17:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:56.987 18:17:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:56.987 18:17:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:56.987 18:17:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:56.987 18:17:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:56.987 18:17:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:56.987 18:17:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:56.987 18:17:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:56.987 18:17:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:56.987 18:17:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:56.987 18:17:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:56.987 18:17:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:56.987 18:17:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:56.987 18:17:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:56.987 18:17:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:56.987 18:17:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:56.987 18:17:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:56.987 18:17:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:56.987 18:17:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:57.246 18:17:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:57.246 18:17:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:57.246 18:17:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:57.246 18:17:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:57.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:57.246 00:08:57.246 --- 10.0.0.2 ping statistics --- 00:08:57.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.246 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:57.246 18:17:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:57.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:57.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:08:57.246 00:08:57.246 --- 10.0.0.3 ping statistics --- 00:08:57.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.246 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:57.246 18:17:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:57.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:57.246 00:08:57.246 --- 10.0.0.1 ping statistics --- 00:08:57.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.246 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:57.246 18:17:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.246 18:17:55 -- nvmf/common.sh@421 -- # return 0 00:08:57.246 18:17:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:57.246 18:17:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.246 18:17:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:57.246 18:17:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:57.246 18:17:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.246 18:17:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:57.246 18:17:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:57.246 18:17:55 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:57.246 18:17:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:57.246 18:17:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.246 18:17:55 -- common/autotest_common.sh@10 -- # set +x 00:08:57.246 18:17:55 -- nvmf/common.sh@469 -- # nvmfpid=72007 00:08:57.246 18:17:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:57.246 18:17:55 -- nvmf/common.sh@470 -- # waitforlisten 72007 00:08:57.246 18:17:55 -- common/autotest_common.sh@829 -- # '[' -z 72007 ']' 00:08:57.246 18:17:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.246 18:17:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.246 18:17:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.246 18:17:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.246 18:17:55 -- common/autotest_common.sh@10 -- # set +x 00:08:57.246 [2024-11-17 18:17:55.369277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:57.246 [2024-11-17 18:17:55.369364] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.246 [2024-11-17 18:17:55.495910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:57.505 [2024-11-17 18:17:55.529124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:57.505 [2024-11-17 18:17:55.529317] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.505 [2024-11-17 18:17:55.529342] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:57.505 [2024-11-17 18:17:55.529368] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.505 [2024-11-17 18:17:55.529524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.505 [2024-11-17 18:17:55.529653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.505 [2024-11-17 18:17:55.529658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.442 18:17:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.442 18:17:56 -- common/autotest_common.sh@862 -- # return 0 00:08:58.442 18:17:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:58.442 18:17:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.442 18:17:56 -- common/autotest_common.sh@10 -- # set +x 00:08:58.442 18:17:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.442 18:17:56 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:58.442 [2024-11-17 18:17:56.638260] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.442 18:17:56 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:58.701 18:17:56 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:58.701 18:17:56 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:58.961 18:17:57 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:58.961 18:17:57 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:59.220 18:17:57 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:59.479 18:17:57 -- target/nvmf_lvol.sh@29 -- # lvs=cf4bf0a8-0b99-4754-81b4-5cfeb69a0815 00:08:59.479 18:17:57 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cf4bf0a8-0b99-4754-81b4-5cfeb69a0815 lvol 20 00:08:59.739 18:17:57 -- target/nvmf_lvol.sh@32 -- # lvol=fc1b6990-9029-4426-87c8-5de9c1166686 00:08:59.739 18:17:57 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:59.998 18:17:58 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fc1b6990-9029-4426-87c8-5de9c1166686 00:09:00.257 18:17:58 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:00.516 [2024-11-17 18:17:58.602076] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.516 18:17:58 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.775 18:17:58 -- target/nvmf_lvol.sh@42 -- # perf_pid=72088 00:09:00.775 18:17:58 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:00.775 18:17:58 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:01.719 18:17:59 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot fc1b6990-9029-4426-87c8-5de9c1166686 MY_SNAPSHOT 
00:09:01.978 18:18:00 -- target/nvmf_lvol.sh@47 -- # snapshot=1c1151d4-4a31-4406-b39e-8b9ea74c10ca 00:09:01.978 18:18:00 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize fc1b6990-9029-4426-87c8-5de9c1166686 30 00:09:02.546 18:18:00 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1c1151d4-4a31-4406-b39e-8b9ea74c10ca MY_CLONE 00:09:02.546 18:18:00 -- target/nvmf_lvol.sh@49 -- # clone=97eb1c0e-7b6b-47e3-853d-1a2d661da0a5 00:09:02.546 18:18:00 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 97eb1c0e-7b6b-47e3-853d-1a2d661da0a5 00:09:03.114 18:18:01 -- target/nvmf_lvol.sh@53 -- # wait 72088 00:09:11.227 Initializing NVMe Controllers 00:09:11.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:11.227 Controller IO queue size 128, less than required. 00:09:11.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:11.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:11.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:11.227 Initialization complete. Launching workers. 00:09:11.227 ======================================================== 00:09:11.227 Latency(us) 00:09:11.227 Device Information : IOPS MiB/s Average min max 00:09:11.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9914.80 38.73 12910.22 2488.68 68695.93 00:09:11.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9841.00 38.44 13006.97 2999.96 55844.79 00:09:11.227 ======================================================== 00:09:11.227 Total : 19755.80 77.17 12958.41 2488.68 68695.93 00:09:11.227 00:09:11.227 18:18:09 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:11.227 18:18:09 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fc1b6990-9029-4426-87c8-5de9c1166686 00:09:11.486 18:18:09 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf4bf0a8-0b99-4754-81b4-5cfeb69a0815 00:09:11.744 18:18:09 -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:11.745 18:18:09 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:11.745 18:18:09 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:11.745 18:18:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:11.745 18:18:09 -- nvmf/common.sh@116 -- # sync 00:09:12.003 18:18:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:12.003 18:18:10 -- nvmf/common.sh@119 -- # set +e 00:09:12.003 18:18:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:12.003 18:18:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:12.003 rmmod nvme_tcp 00:09:12.003 rmmod nvme_fabrics 00:09:12.003 rmmod nvme_keyring 00:09:12.003 18:18:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:12.003 18:18:10 -- nvmf/common.sh@123 -- # set -e 00:09:12.003 18:18:10 -- nvmf/common.sh@124 -- # return 0 00:09:12.003 18:18:10 -- nvmf/common.sh@477 -- # '[' -n 72007 ']' 00:09:12.003 18:18:10 -- nvmf/common.sh@478 -- # killprocess 72007 00:09:12.003 18:18:10 -- common/autotest_common.sh@936 -- # '[' -z 72007 ']' 00:09:12.003 18:18:10 -- common/autotest_common.sh@940 -- # kill -0 72007 00:09:12.003 18:18:10 -- common/autotest_common.sh@941 -- # uname 00:09:12.003 
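Pulled together, the volume path this lvol test exercises is the following sequence; every command appears in the trace above, UUIDs are elided here and the returned values are marked with arrows:
# Condensed sketch of the provisioning flow (UUID placeholders, not literal values).
scripts/rpc.py bdev_malloc_create 64 512                    # -> Malloc0
scripts/rpc.py bdev_malloc_create 64 512                    # -> Malloc1
scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs           # -> <lvs-uuid>
scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20       # -> <lvol-uuid>
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT   # -> <snap-uuid>
scripts/rpc.py bdev_lvol_resize <lvol-uuid> 30
scripts/rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE         # -> <clone-uuid>
scripts/rpc.py bdev_lvol_inflate <clone-uuid>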
18:18:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:12.003 18:18:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72007 00:09:12.003 killing process with pid 72007 00:09:12.003 18:18:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:12.003 18:18:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:12.003 18:18:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72007' 00:09:12.003 18:18:10 -- common/autotest_common.sh@955 -- # kill 72007 00:09:12.003 18:18:10 -- common/autotest_common.sh@960 -- # wait 72007 00:09:12.262 18:18:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:12.262 18:18:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:12.262 18:18:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:12.262 18:18:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.262 18:18:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:12.262 18:18:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.262 18:18:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.262 18:18:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.262 18:18:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:12.262 ************************************ 00:09:12.262 END TEST nvmf_lvol 00:09:12.262 ************************************ 00:09:12.262 00:09:12.262 real 0m15.602s 00:09:12.262 user 1m4.793s 00:09:12.262 sys 0m4.464s 00:09:12.262 18:18:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:12.262 18:18:10 -- common/autotest_common.sh@10 -- # set +x 00:09:12.262 18:18:10 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:12.262 18:18:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:12.262 18:18:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.262 18:18:10 -- common/autotest_common.sh@10 -- # set +x 00:09:12.262 ************************************ 00:09:12.262 START TEST nvmf_lvs_grow 00:09:12.262 ************************************ 00:09:12.262 18:18:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:12.262 * Looking for test storage... 
00:09:12.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.263 18:18:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:12.263 18:18:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:12.263 18:18:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:12.522 18:18:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:12.522 18:18:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:12.522 18:18:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:12.522 18:18:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:12.522 18:18:10 -- scripts/common.sh@335 -- # IFS=.-: 00:09:12.522 18:18:10 -- scripts/common.sh@335 -- # read -ra ver1 00:09:12.522 18:18:10 -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.522 18:18:10 -- scripts/common.sh@336 -- # read -ra ver2 00:09:12.522 18:18:10 -- scripts/common.sh@337 -- # local 'op=<' 00:09:12.522 18:18:10 -- scripts/common.sh@339 -- # ver1_l=2 00:09:12.522 18:18:10 -- scripts/common.sh@340 -- # ver2_l=1 00:09:12.522 18:18:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:12.522 18:18:10 -- scripts/common.sh@343 -- # case "$op" in 00:09:12.522 18:18:10 -- scripts/common.sh@344 -- # : 1 00:09:12.522 18:18:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:12.522 18:18:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.522 18:18:10 -- scripts/common.sh@364 -- # decimal 1 00:09:12.522 18:18:10 -- scripts/common.sh@352 -- # local d=1 00:09:12.522 18:18:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.522 18:18:10 -- scripts/common.sh@354 -- # echo 1 00:09:12.522 18:18:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:12.522 18:18:10 -- scripts/common.sh@365 -- # decimal 2 00:09:12.522 18:18:10 -- scripts/common.sh@352 -- # local d=2 00:09:12.522 18:18:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.522 18:18:10 -- scripts/common.sh@354 -- # echo 2 00:09:12.522 18:18:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:12.522 18:18:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:12.522 18:18:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:12.522 18:18:10 -- scripts/common.sh@367 -- # return 0 00:09:12.522 18:18:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.522 18:18:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:12.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.522 --rc genhtml_branch_coverage=1 00:09:12.522 --rc genhtml_function_coverage=1 00:09:12.522 --rc genhtml_legend=1 00:09:12.522 --rc geninfo_all_blocks=1 00:09:12.522 --rc geninfo_unexecuted_blocks=1 00:09:12.522 00:09:12.522 ' 00:09:12.522 18:18:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:12.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.522 --rc genhtml_branch_coverage=1 00:09:12.522 --rc genhtml_function_coverage=1 00:09:12.522 --rc genhtml_legend=1 00:09:12.522 --rc geninfo_all_blocks=1 00:09:12.522 --rc geninfo_unexecuted_blocks=1 00:09:12.522 00:09:12.522 ' 00:09:12.522 18:18:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:12.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.522 --rc genhtml_branch_coverage=1 00:09:12.522 --rc genhtml_function_coverage=1 00:09:12.522 --rc genhtml_legend=1 00:09:12.522 --rc geninfo_all_blocks=1 00:09:12.522 --rc geninfo_unexecuted_blocks=1 00:09:12.522 00:09:12.522 ' 00:09:12.522 
18:18:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:12.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.522 --rc genhtml_branch_coverage=1 00:09:12.522 --rc genhtml_function_coverage=1 00:09:12.522 --rc genhtml_legend=1 00:09:12.522 --rc geninfo_all_blocks=1 00:09:12.522 --rc geninfo_unexecuted_blocks=1 00:09:12.522 00:09:12.522 ' 00:09:12.522 18:18:10 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.522 18:18:10 -- nvmf/common.sh@7 -- # uname -s 00:09:12.522 18:18:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.522 18:18:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.522 18:18:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.522 18:18:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.522 18:18:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.522 18:18:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.522 18:18:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.522 18:18:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.522 18:18:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.522 18:18:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.522 18:18:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:09:12.522 18:18:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:09:12.522 18:18:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.522 18:18:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.522 18:18:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.522 18:18:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.522 18:18:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.522 18:18:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.522 18:18:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.522 18:18:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.522 18:18:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.522 18:18:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.522 18:18:10 -- paths/export.sh@5 -- # export PATH 00:09:12.522 18:18:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.522 18:18:10 -- nvmf/common.sh@46 -- # : 0 00:09:12.522 18:18:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:12.522 18:18:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:12.522 18:18:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:12.522 18:18:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.522 18:18:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.522 18:18:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:12.522 18:18:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:12.522 18:18:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:12.522 18:18:10 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.522 18:18:10 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:12.522 18:18:10 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:09:12.522 18:18:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:12.522 18:18:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.522 18:18:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:12.522 18:18:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:12.522 18:18:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:12.522 18:18:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.522 18:18:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.523 18:18:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.523 18:18:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:12.523 18:18:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:12.523 18:18:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:12.523 18:18:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:12.523 18:18:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:12.523 18:18:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:12.523 18:18:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.523 18:18:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.523 18:18:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:12.523 18:18:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:12.523 18:18:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.523 18:18:10 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.523 18:18:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.523 18:18:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.523 18:18:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.523 18:18:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.523 18:18:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.523 18:18:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.523 18:18:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:12.523 18:18:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:12.523 Cannot find device "nvmf_tgt_br" 00:09:12.523 18:18:10 -- nvmf/common.sh@154 -- # true 00:09:12.523 18:18:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.523 Cannot find device "nvmf_tgt_br2" 00:09:12.523 18:18:10 -- nvmf/common.sh@155 -- # true 00:09:12.523 18:18:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:12.523 18:18:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:12.523 Cannot find device "nvmf_tgt_br" 00:09:12.523 18:18:10 -- nvmf/common.sh@157 -- # true 00:09:12.523 18:18:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:12.523 Cannot find device "nvmf_tgt_br2" 00:09:12.523 18:18:10 -- nvmf/common.sh@158 -- # true 00:09:12.523 18:18:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:12.523 18:18:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:12.523 18:18:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.523 18:18:10 -- nvmf/common.sh@161 -- # true 00:09:12.523 18:18:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.523 18:18:10 -- nvmf/common.sh@162 -- # true 00:09:12.523 18:18:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.523 18:18:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.523 18:18:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.523 18:18:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.782 18:18:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.782 18:18:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.782 18:18:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.782 18:18:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:12.782 18:18:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:12.782 18:18:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:12.782 18:18:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:12.782 18:18:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:12.782 18:18:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:12.782 18:18:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.782 18:18:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:09:12.782 18:18:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.782 18:18:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:12.782 18:18:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:12.782 18:18:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.782 18:18:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.782 18:18:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.782 18:18:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.782 18:18:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.782 18:18:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:12.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:12.782 00:09:12.782 --- 10.0.0.2 ping statistics --- 00:09:12.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.782 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:12.782 18:18:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:12.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:09:12.782 00:09:12.782 --- 10.0.0.3 ping statistics --- 00:09:12.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.782 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:12.782 18:18:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:12.782 00:09:12.782 --- 10.0.0.1 ping statistics --- 00:09:12.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.782 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:12.782 18:18:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.782 18:18:10 -- nvmf/common.sh@421 -- # return 0 00:09:12.782 18:18:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:12.782 18:18:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.782 18:18:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:12.782 18:18:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:12.782 18:18:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.782 18:18:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:12.782 18:18:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:12.782 18:18:10 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:09:12.782 18:18:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:12.782 18:18:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:12.782 18:18:10 -- common/autotest_common.sh@10 -- # set +x 00:09:12.782 18:18:10 -- nvmf/common.sh@469 -- # nvmfpid=72418 00:09:12.782 18:18:10 -- nvmf/common.sh@470 -- # waitforlisten 72418 00:09:12.782 18:18:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:12.782 18:18:10 -- common/autotest_common.sh@829 -- # '[' -z 72418 ']' 00:09:12.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
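[editor's note] Next the bridge-side peers are enslaved to nvmf_br so the initiator interface can reach both target addresses, TCP port 4420 is opened for NVMe/TCP, connectivity is verified with single pings in both directions, and the target application is launched inside the namespace. A condensed sketch of those steps (here $SPDK_ROOT stands in for /home/vagrant/spdk_repo/spdk; core mask and event flags match the trace):

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # hairpin traffic across the bridge
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # initiator -> target ports
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

    ip netns exec nvmf_tgt_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &

Once the target reports its reactor is running, the TCP transport is created over JSON-RPC (nvmf_create_transport -t tcp -o -u 8192), which is the next step visible in the trace.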
00:09:12.782 18:18:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.782 18:18:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.783 18:18:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.783 18:18:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.783 18:18:10 -- common/autotest_common.sh@10 -- # set +x 00:09:12.783 [2024-11-17 18:18:11.007524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:12.783 [2024-11-17 18:18:11.008100] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.042 [2024-11-17 18:18:11.149771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.042 [2024-11-17 18:18:11.188525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:13.042 [2024-11-17 18:18:11.188720] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.042 [2024-11-17 18:18:11.188749] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.042 [2024-11-17 18:18:11.188758] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.042 [2024-11-17 18:18:11.188781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.979 18:18:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.979 18:18:11 -- common/autotest_common.sh@862 -- # return 0 00:09:13.979 18:18:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:13.979 18:18:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:13.979 18:18:11 -- common/autotest_common.sh@10 -- # set +x 00:09:13.979 18:18:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.979 18:18:12 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:14.238 [2024-11-17 18:18:12.279264] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.238 18:18:12 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:09:14.238 18:18:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:14.238 18:18:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.238 18:18:12 -- common/autotest_common.sh@10 -- # set +x 00:09:14.238 ************************************ 00:09:14.238 START TEST lvs_grow_clean 00:09:14.238 ************************************ 00:09:14.238 18:18:12 -- common/autotest_common.sh@1114 -- # lvs_grow 00:09:14.238 18:18:12 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:14.238 18:18:12 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:14.238 18:18:12 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:14.238 18:18:12 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:14.238 18:18:12 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:14.238 18:18:12 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:14.238 18:18:12 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:14.238 18:18:12 -- 
target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:14.238 18:18:12 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.497 18:18:12 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:14.497 18:18:12 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:14.756 18:18:12 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:14.756 18:18:12 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:14.756 18:18:12 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:15.016 18:18:13 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:15.016 18:18:13 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:15.016 18:18:13 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0aacf06-5d50-48ff-96a5-e46b18aa248a lvol 150 00:09:15.275 18:18:13 -- target/nvmf_lvs_grow.sh@33 -- # lvol=f382ae47-06f7-4b04-a7c9-436ef3b410ae 00:09:15.275 18:18:13 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:15.275 18:18:13 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:15.534 [2024-11-17 18:18:13.770453] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:15.534 [2024-11-17 18:18:13.770547] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:15.534 true 00:09:15.534 18:18:13 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:15.534 18:18:13 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:16.103 18:18:14 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:16.103 18:18:14 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:16.103 18:18:14 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f382ae47-06f7-4b04-a7c9-436ef3b410ae 00:09:16.671 18:18:14 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:16.671 [2024-11-17 18:18:14.887384] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.671 18:18:14 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.931 18:18:15 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72506 00:09:16.931 18:18:15 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:16.931 18:18:15 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:16.931 18:18:15 -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72506 /var/tmp/bdevperf.sock 00:09:16.931 18:18:15 -- common/autotest_common.sh@829 -- # '[' -z 72506 ']' 00:09:16.931 18:18:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:16.931 18:18:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:16.931 18:18:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:16.931 18:18:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.931 18:18:15 -- common/autotest_common.sh@10 -- # set +x 00:09:17.191 [2024-11-17 18:18:15.214605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:17.191 [2024-11-17 18:18:15.214680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72506 ] 00:09:17.191 [2024-11-17 18:18:15.352094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.191 [2024-11-17 18:18:15.393773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.128 18:18:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:18.128 18:18:16 -- common/autotest_common.sh@862 -- # return 0 00:09:18.128 18:18:16 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:18.387 Nvme0n1 00:09:18.387 18:18:16 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:18.646 [ 00:09:18.646 { 00:09:18.646 "name": "Nvme0n1", 00:09:18.646 "aliases": [ 00:09:18.646 "f382ae47-06f7-4b04-a7c9-436ef3b410ae" 00:09:18.646 ], 00:09:18.646 "product_name": "NVMe disk", 00:09:18.646 "block_size": 4096, 00:09:18.646 "num_blocks": 38912, 00:09:18.646 "uuid": "f382ae47-06f7-4b04-a7c9-436ef3b410ae", 00:09:18.646 "assigned_rate_limits": { 00:09:18.646 "rw_ios_per_sec": 0, 00:09:18.646 "rw_mbytes_per_sec": 0, 00:09:18.646 "r_mbytes_per_sec": 0, 00:09:18.646 "w_mbytes_per_sec": 0 00:09:18.646 }, 00:09:18.646 "claimed": false, 00:09:18.646 "zoned": false, 00:09:18.646 "supported_io_types": { 00:09:18.646 "read": true, 00:09:18.646 "write": true, 00:09:18.646 "unmap": true, 00:09:18.646 "write_zeroes": true, 00:09:18.646 "flush": true, 00:09:18.646 "reset": true, 00:09:18.646 "compare": true, 00:09:18.646 "compare_and_write": true, 00:09:18.646 "abort": true, 00:09:18.646 "nvme_admin": true, 00:09:18.646 "nvme_io": true 00:09:18.646 }, 00:09:18.646 "driver_specific": { 00:09:18.646 "nvme": [ 00:09:18.646 { 00:09:18.646 "trid": { 00:09:18.646 "trtype": "TCP", 00:09:18.646 "adrfam": "IPv4", 00:09:18.646 "traddr": "10.0.0.2", 00:09:18.646 "trsvcid": "4420", 00:09:18.646 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:18.646 }, 00:09:18.646 "ctrlr_data": { 00:09:18.646 "cntlid": 1, 00:09:18.646 "vendor_id": "0x8086", 00:09:18.646 "model_number": "SPDK bdev Controller", 00:09:18.646 "serial_number": "SPDK0", 00:09:18.646 "firmware_revision": "24.01.1", 00:09:18.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:18.646 "oacs": { 00:09:18.646 "security": 0, 00:09:18.646 "format": 0, 00:09:18.646 "firmware": 0, 
00:09:18.646 "ns_manage": 0 00:09:18.646 }, 00:09:18.646 "multi_ctrlr": true, 00:09:18.646 "ana_reporting": false 00:09:18.646 }, 00:09:18.646 "vs": { 00:09:18.646 "nvme_version": "1.3" 00:09:18.646 }, 00:09:18.646 "ns_data": { 00:09:18.646 "id": 1, 00:09:18.646 "can_share": true 00:09:18.646 } 00:09:18.646 } 00:09:18.646 ], 00:09:18.646 "mp_policy": "active_passive" 00:09:18.646 } 00:09:18.646 } 00:09:18.646 ] 00:09:18.646 18:18:16 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72529 00:09:18.646 18:18:16 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:18.646 18:18:16 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:18.904 Running I/O for 10 seconds... 00:09:19.838 Latency(us) 00:09:19.838 [2024-11-17T18:18:18.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.838 [2024-11-17T18:18:18.105Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.838 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:19.838 [2024-11-17T18:18:18.105Z] =================================================================================================================== 00:09:19.838 [2024-11-17T18:18:18.105Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:19.838 00:09:20.775 18:18:18 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:20.775 [2024-11-17T18:18:19.042Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.775 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:20.775 [2024-11-17T18:18:19.042Z] =================================================================================================================== 00:09:20.775 [2024-11-17T18:18:19.042Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:20.775 00:09:21.034 true 00:09:21.034 18:18:19 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:21.034 18:18:19 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:21.293 18:18:19 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:21.293 18:18:19 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:21.293 18:18:19 -- target/nvmf_lvs_grow.sh@65 -- # wait 72529 00:09:21.860 [2024-11-17T18:18:20.127Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.860 Nvme0n1 : 3.00 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:09:21.860 [2024-11-17T18:18:20.127Z] =================================================================================================================== 00:09:21.860 [2024-11-17T18:18:20.127Z] Total : 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:09:21.860 00:09:22.797 [2024-11-17T18:18:21.064Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.797 Nvme0n1 : 4.00 6508.75 25.42 0.00 0.00 0.00 0.00 0.00 00:09:22.797 [2024-11-17T18:18:21.064Z] =================================================================================================================== 00:09:22.797 [2024-11-17T18:18:21.064Z] Total : 6508.75 25.42 0.00 0.00 0.00 0.00 0.00 00:09:22.797 00:09:23.734 [2024-11-17T18:18:22.001Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.734 Nvme0n1 : 5.00 6502.40 25.40 0.00 0.00 0.00 0.00 0.00 00:09:23.734 [2024-11-17T18:18:22.001Z] 
=================================================================================================================== 00:09:23.734 [2024-11-17T18:18:22.001Z] Total : 6502.40 25.40 0.00 0.00 0.00 0.00 0.00 00:09:23.734 00:09:24.709 [2024-11-17T18:18:22.976Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.709 Nvme0n1 : 6.00 6498.17 25.38 0.00 0.00 0.00 0.00 0.00 00:09:24.709 [2024-11-17T18:18:22.976Z] =================================================================================================================== 00:09:24.709 [2024-11-17T18:18:22.976Z] Total : 6498.17 25.38 0.00 0.00 0.00 0.00 0.00 00:09:24.709 00:09:26.086 [2024-11-17T18:18:24.353Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.086 Nvme0n1 : 7.00 6458.86 25.23 0.00 0.00 0.00 0.00 0.00 00:09:26.086 [2024-11-17T18:18:24.353Z] =================================================================================================================== 00:09:26.086 [2024-11-17T18:18:24.353Z] Total : 6458.86 25.23 0.00 0.00 0.00 0.00 0.00 00:09:26.086 00:09:27.022 [2024-11-17T18:18:25.289Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.022 Nvme0n1 : 8.00 6461.12 25.24 0.00 0.00 0.00 0.00 0.00 00:09:27.022 [2024-11-17T18:18:25.289Z] =================================================================================================================== 00:09:27.022 [2024-11-17T18:18:25.289Z] Total : 6461.12 25.24 0.00 0.00 0.00 0.00 0.00 00:09:27.022 00:09:27.959 [2024-11-17T18:18:26.226Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.959 Nvme0n1 : 9.00 6462.89 25.25 0.00 0.00 0.00 0.00 0.00 00:09:27.959 [2024-11-17T18:18:26.226Z] =================================================================================================================== 00:09:27.959 [2024-11-17T18:18:26.226Z] Total : 6462.89 25.25 0.00 0.00 0.00 0.00 0.00 00:09:27.959 00:09:28.895 [2024-11-17T18:18:27.162Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.895 Nvme0n1 : 10.00 6464.30 25.25 0.00 0.00 0.00 0.00 0.00 00:09:28.895 [2024-11-17T18:18:27.162Z] =================================================================================================================== 00:09:28.895 [2024-11-17T18:18:27.162Z] Total : 6464.30 25.25 0.00 0.00 0.00 0.00 0.00 00:09:28.895 00:09:28.895 00:09:28.895 Latency(us) 00:09:28.895 [2024-11-17T18:18:27.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.895 [2024-11-17T18:18:27.162Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.895 Nvme0n1 : 10.02 6466.24 25.26 0.00 0.00 19790.28 17039.36 42657.98 00:09:28.895 [2024-11-17T18:18:27.162Z] =================================================================================================================== 00:09:28.895 [2024-11-17T18:18:27.162Z] Total : 6466.24 25.26 0.00 0.00 19790.28 17039.36 42657.98 00:09:28.895 0 00:09:28.895 18:18:26 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72506 00:09:28.895 18:18:26 -- common/autotest_common.sh@936 -- # '[' -z 72506 ']' 00:09:28.895 18:18:26 -- common/autotest_common.sh@940 -- # kill -0 72506 00:09:28.895 18:18:26 -- common/autotest_common.sh@941 -- # uname 00:09:28.895 18:18:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:28.895 18:18:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72506 00:09:28.895 18:18:27 -- common/autotest_common.sh@942 
-- # process_name=reactor_1 00:09:28.895 18:18:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:28.895 killing process with pid 72506 00:09:28.895 18:18:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72506' 00:09:28.895 18:18:27 -- common/autotest_common.sh@955 -- # kill 72506 00:09:28.895 Received shutdown signal, test time was about 10.000000 seconds 00:09:28.895 00:09:28.895 Latency(us) 00:09:28.895 [2024-11-17T18:18:27.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.895 [2024-11-17T18:18:27.162Z] =================================================================================================================== 00:09:28.895 [2024-11-17T18:18:27.162Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:28.895 18:18:27 -- common/autotest_common.sh@960 -- # wait 72506 00:09:29.154 18:18:27 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:29.413 18:18:27 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:29.413 18:18:27 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:29.672 18:18:27 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:29.672 18:18:27 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:09:29.672 18:18:27 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:29.672 [2024-11-17 18:18:27.919900] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:29.931 18:18:27 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:29.931 18:18:27 -- common/autotest_common.sh@650 -- # local es=0 00:09:29.931 18:18:27 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:29.931 18:18:27 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.931 18:18:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.931 18:18:27 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.931 18:18:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.931 18:18:27 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.931 18:18:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.931 18:18:27 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.931 18:18:27 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:29.931 18:18:27 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:29.931 request: 00:09:29.931 { 00:09:29.931 "uuid": "e0aacf06-5d50-48ff-96a5-e46b18aa248a", 00:09:29.931 "method": "bdev_lvol_get_lvstores", 00:09:29.931 "req_id": 1 00:09:29.931 } 00:09:29.931 Got JSON-RPC error response 00:09:29.931 response: 00:09:29.931 { 00:09:29.931 "code": -19, 00:09:29.931 "message": "No such device" 00:09:29.931 } 00:09:29.931 18:18:28 -- common/autotest_common.sh@653 -- # es=1 00:09:29.931 18:18:28 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:29.932 18:18:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:29.932 18:18:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:29.932 18:18:28 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:30.190 aio_bdev 00:09:30.190 18:18:28 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev f382ae47-06f7-4b04-a7c9-436ef3b410ae 00:09:30.190 18:18:28 -- common/autotest_common.sh@897 -- # local bdev_name=f382ae47-06f7-4b04-a7c9-436ef3b410ae 00:09:30.190 18:18:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:30.190 18:18:28 -- common/autotest_common.sh@899 -- # local i 00:09:30.190 18:18:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:30.190 18:18:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:30.190 18:18:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:30.450 18:18:28 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f382ae47-06f7-4b04-a7c9-436ef3b410ae -t 2000 00:09:30.709 [ 00:09:30.709 { 00:09:30.709 "name": "f382ae47-06f7-4b04-a7c9-436ef3b410ae", 00:09:30.709 "aliases": [ 00:09:30.709 "lvs/lvol" 00:09:30.709 ], 00:09:30.709 "product_name": "Logical Volume", 00:09:30.709 "block_size": 4096, 00:09:30.709 "num_blocks": 38912, 00:09:30.709 "uuid": "f382ae47-06f7-4b04-a7c9-436ef3b410ae", 00:09:30.709 "assigned_rate_limits": { 00:09:30.709 "rw_ios_per_sec": 0, 00:09:30.709 "rw_mbytes_per_sec": 0, 00:09:30.709 "r_mbytes_per_sec": 0, 00:09:30.709 "w_mbytes_per_sec": 0 00:09:30.709 }, 00:09:30.709 "claimed": false, 00:09:30.709 "zoned": false, 00:09:30.709 "supported_io_types": { 00:09:30.709 "read": true, 00:09:30.709 "write": true, 00:09:30.709 "unmap": true, 00:09:30.709 "write_zeroes": true, 00:09:30.709 "flush": false, 00:09:30.709 "reset": true, 00:09:30.709 "compare": false, 00:09:30.709 "compare_and_write": false, 00:09:30.709 "abort": false, 00:09:30.709 "nvme_admin": false, 00:09:30.709 "nvme_io": false 00:09:30.709 }, 00:09:30.709 "driver_specific": { 00:09:30.709 "lvol": { 00:09:30.709 "lvol_store_uuid": "e0aacf06-5d50-48ff-96a5-e46b18aa248a", 00:09:30.709 "base_bdev": "aio_bdev", 00:09:30.709 "thin_provision": false, 00:09:30.709 "snapshot": false, 00:09:30.709 "clone": false, 00:09:30.709 "esnap_clone": false 00:09:30.709 } 00:09:30.709 } 00:09:30.709 } 00:09:30.709 ] 00:09:30.709 18:18:28 -- common/autotest_common.sh@905 -- # return 0 00:09:30.709 18:18:28 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:30.709 18:18:28 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:30.967 18:18:29 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:30.968 18:18:29 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:30.968 18:18:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:31.225 18:18:29 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:31.225 18:18:29 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f382ae47-06f7-4b04-a7c9-436ef3b410ae 00:09:31.483 18:18:29 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u e0aacf06-5d50-48ff-96a5-e46b18aa248a 00:09:31.742 18:18:29 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:32.001 18:18:30 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:32.260 ************************************ 00:09:32.260 END TEST lvs_grow_clean 00:09:32.260 ************************************ 00:09:32.260 00:09:32.260 real 0m18.202s 00:09:32.260 user 0m17.406s 00:09:32.260 sys 0m2.389s 00:09:32.260 18:18:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.260 18:18:30 -- common/autotest_common.sh@10 -- # set +x 00:09:32.519 18:18:30 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:32.519 18:18:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:32.519 18:18:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.519 18:18:30 -- common/autotest_common.sh@10 -- # set +x 00:09:32.519 ************************************ 00:09:32.519 START TEST lvs_grow_dirty 00:09:32.519 ************************************ 00:09:32.519 18:18:30 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:09:32.519 18:18:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:32.519 18:18:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:32.519 18:18:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:32.519 18:18:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:32.519 18:18:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:32.519 18:18:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:32.519 18:18:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:32.519 18:18:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:32.519 18:18:30 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:32.778 18:18:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:32.778 18:18:30 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:33.036 18:18:31 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b2c54d31-362b-425d-ba51-6972b776d41e 00:09:33.036 18:18:31 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:33.036 18:18:31 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:33.296 18:18:31 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:33.296 18:18:31 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:33.296 18:18:31 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b2c54d31-362b-425d-ba51-6972b776d41e lvol 150 00:09:33.296 18:18:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99 00:09:33.296 18:18:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:33.296 18:18:31 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:33.555 [2024-11-17 18:18:31.793233] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO 
device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:33.555 [2024-11-17 18:18:31.793352] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:33.555 true 00:09:33.555 18:18:31 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:33.555 18:18:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:33.814 18:18:32 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:33.814 18:18:32 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:34.072 18:18:32 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99 00:09:34.331 18:18:32 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:34.589 18:18:32 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:34.848 18:18:32 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72773 00:09:34.848 18:18:32 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:34.848 18:18:32 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:34.848 18:18:32 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72773 /var/tmp/bdevperf.sock 00:09:34.848 18:18:32 -- common/autotest_common.sh@829 -- # '[' -z 72773 ']' 00:09:34.848 18:18:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:34.848 18:18:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.848 18:18:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:34.848 18:18:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.848 18:18:32 -- common/autotest_common.sh@10 -- # set +x 00:09:34.848 [2024-11-17 18:18:33.023431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
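[editor's note] Both the clean run that just finished and the dirty run starting here drive the same provisioning flow: a 200 MiB file-backed AIO bdev carries a logical-volume store, a 150 MiB lvol on it is exported through nqn.2016-06.io.spdk:cnode0, and after the backing file is enlarged to 400 MiB the lvstore is grown to pick up the new clusters. A hedged outline of that flow (rpc.py abbreviates the full scripts/rpc.py path used in the trace; the cluster counts 49 -> 99 are the values observed in this run):

    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio_file"
    rpc.py bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before growing
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    truncate -s 400M "$aio_file"              # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev           # block count goes from 51200 to 102400
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"   # lvstore grows while I/O is running
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after growing

In the clean run above, the grow happens concurrently with the bdevperf workload and the reported IOPS stay in the roughly 6.4-6.6K range throughout.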
00:09:34.848 [2024-11-17 18:18:33.023763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72773 ] 00:09:35.106 [2024-11-17 18:18:33.161240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.106 [2024-11-17 18:18:33.200415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.673 18:18:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.673 18:18:33 -- common/autotest_common.sh@862 -- # return 0 00:09:35.673 18:18:33 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:36.241 Nvme0n1 00:09:36.241 18:18:34 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:36.241 [ 00:09:36.241 { 00:09:36.241 "name": "Nvme0n1", 00:09:36.241 "aliases": [ 00:09:36.241 "c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99" 00:09:36.241 ], 00:09:36.241 "product_name": "NVMe disk", 00:09:36.241 "block_size": 4096, 00:09:36.241 "num_blocks": 38912, 00:09:36.241 "uuid": "c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99", 00:09:36.241 "assigned_rate_limits": { 00:09:36.241 "rw_ios_per_sec": 0, 00:09:36.241 "rw_mbytes_per_sec": 0, 00:09:36.241 "r_mbytes_per_sec": 0, 00:09:36.241 "w_mbytes_per_sec": 0 00:09:36.241 }, 00:09:36.241 "claimed": false, 00:09:36.241 "zoned": false, 00:09:36.241 "supported_io_types": { 00:09:36.241 "read": true, 00:09:36.241 "write": true, 00:09:36.241 "unmap": true, 00:09:36.241 "write_zeroes": true, 00:09:36.241 "flush": true, 00:09:36.241 "reset": true, 00:09:36.241 "compare": true, 00:09:36.241 "compare_and_write": true, 00:09:36.241 "abort": true, 00:09:36.241 "nvme_admin": true, 00:09:36.241 "nvme_io": true 00:09:36.241 }, 00:09:36.241 "driver_specific": { 00:09:36.241 "nvme": [ 00:09:36.241 { 00:09:36.241 "trid": { 00:09:36.241 "trtype": "TCP", 00:09:36.241 "adrfam": "IPv4", 00:09:36.241 "traddr": "10.0.0.2", 00:09:36.241 "trsvcid": "4420", 00:09:36.241 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:36.241 }, 00:09:36.241 "ctrlr_data": { 00:09:36.242 "cntlid": 1, 00:09:36.242 "vendor_id": "0x8086", 00:09:36.242 "model_number": "SPDK bdev Controller", 00:09:36.242 "serial_number": "SPDK0", 00:09:36.242 "firmware_revision": "24.01.1", 00:09:36.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:36.242 "oacs": { 00:09:36.242 "security": 0, 00:09:36.242 "format": 0, 00:09:36.242 "firmware": 0, 00:09:36.242 "ns_manage": 0 00:09:36.242 }, 00:09:36.242 "multi_ctrlr": true, 00:09:36.242 "ana_reporting": false 00:09:36.242 }, 00:09:36.242 "vs": { 00:09:36.242 "nvme_version": "1.3" 00:09:36.242 }, 00:09:36.242 "ns_data": { 00:09:36.242 "id": 1, 00:09:36.242 "can_share": true 00:09:36.242 } 00:09:36.242 } 00:09:36.242 ], 00:09:36.242 "mp_policy": "active_passive" 00:09:36.242 } 00:09:36.242 } 00:09:36.242 ] 00:09:36.242 18:18:34 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72797 00:09:36.242 18:18:34 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:36.242 18:18:34 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:36.501 Running I/O for 10 seconds... 
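[editor's note] The I/O load comes from a separate bdevperf process, started with -z so it waits for RPC configuration on its own socket; the exported namespace is then attached as an NVMe-oF bdev and the preconfigured random-write job is launched over RPC. Roughly, with the same socket path, address and NQN as in the trace ($SPDK_ROOT as before):

    "$SPDK_ROOT/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000      # wait for the bdev to appear

    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

While the 10-second job reports its per-second IOPS below, the test grows the lvstore underneath it, so the check is essentially that I/O keeps completing while the store grows.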
00:09:37.450 Latency(us) 00:09:37.450 [2024-11-17T18:18:35.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.450 [2024-11-17T18:18:35.717Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.450 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:37.450 [2024-11-17T18:18:35.717Z] =================================================================================================================== 00:09:37.450 [2024-11-17T18:18:35.717Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:37.450 00:09:38.410 18:18:36 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:38.410 [2024-11-17T18:18:36.677Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.410 Nvme0n1 : 2.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:38.410 [2024-11-17T18:18:36.677Z] =================================================================================================================== 00:09:38.410 [2024-11-17T18:18:36.677Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:38.410 00:09:38.669 true 00:09:38.670 18:18:36 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:38.670 18:18:36 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:38.928 18:18:37 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:38.928 18:18:37 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:38.928 18:18:37 -- target/nvmf_lvs_grow.sh@65 -- # wait 72797 00:09:39.496 [2024-11-17T18:18:37.763Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.496 Nvme0n1 : 3.00 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:39.496 [2024-11-17T18:18:37.763Z] =================================================================================================================== 00:09:39.496 [2024-11-17T18:18:37.763Z] Total : 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:39.496 00:09:40.434 [2024-11-17T18:18:38.701Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.434 Nvme0n1 : 4.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:40.434 [2024-11-17T18:18:38.701Z] =================================================================================================================== 00:09:40.434 [2024-11-17T18:18:38.701Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:40.434 00:09:41.371 [2024-11-17T18:18:39.638Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.371 Nvme0n1 : 5.00 6633.80 25.91 0.00 0.00 0.00 0.00 0.00 00:09:41.371 [2024-11-17T18:18:39.638Z] =================================================================================================================== 00:09:41.371 [2024-11-17T18:18:39.638Z] Total : 6633.80 25.91 0.00 0.00 0.00 0.00 0.00 00:09:41.371 00:09:42.749 [2024-11-17T18:18:41.016Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.749 Nvme0n1 : 6.00 6565.33 25.65 0.00 0.00 0.00 0.00 0.00 00:09:42.749 [2024-11-17T18:18:41.016Z] =================================================================================================================== 00:09:42.749 [2024-11-17T18:18:41.016Z] Total : 6565.33 25.65 0.00 0.00 0.00 0.00 0.00 00:09:42.749 00:09:43.686 [2024-11-17T18:18:41.953Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:43.686 Nvme0n1 : 7.00 6534.57 25.53 0.00 0.00 0.00 0.00 0.00 00:09:43.686 [2024-11-17T18:18:41.953Z] =================================================================================================================== 00:09:43.686 [2024-11-17T18:18:41.953Z] Total : 6534.57 25.53 0.00 0.00 0.00 0.00 0.00 00:09:43.686 00:09:44.623 [2024-11-17T18:18:42.890Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.623 Nvme0n1 : 8.00 6483.62 25.33 0.00 0.00 0.00 0.00 0.00 00:09:44.623 [2024-11-17T18:18:42.890Z] =================================================================================================================== 00:09:44.623 [2024-11-17T18:18:42.890Z] Total : 6483.62 25.33 0.00 0.00 0.00 0.00 0.00 00:09:44.623 00:09:45.560 [2024-11-17T18:18:43.827Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.560 Nvme0n1 : 9.00 6482.89 25.32 0.00 0.00 0.00 0.00 0.00 00:09:45.560 [2024-11-17T18:18:43.827Z] =================================================================================================================== 00:09:45.560 [2024-11-17T18:18:43.827Z] Total : 6482.89 25.32 0.00 0.00 0.00 0.00 0.00 00:09:45.560 00:09:46.497 [2024-11-17T18:18:44.764Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.497 Nvme0n1 : 10.00 6482.30 25.32 0.00 0.00 0.00 0.00 0.00 00:09:46.497 [2024-11-17T18:18:44.764Z] =================================================================================================================== 00:09:46.497 [2024-11-17T18:18:44.764Z] Total : 6482.30 25.32 0.00 0.00 0.00 0.00 0.00 00:09:46.497 00:09:46.497 00:09:46.497 Latency(us) 00:09:46.497 [2024-11-17T18:18:44.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.497 [2024-11-17T18:18:44.764Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.497 Nvme0n1 : 10.02 6482.76 25.32 0.00 0.00 19739.53 7298.33 69587.32 00:09:46.497 [2024-11-17T18:18:44.764Z] =================================================================================================================== 00:09:46.497 [2024-11-17T18:18:44.764Z] Total : 6482.76 25.32 0.00 0.00 19739.53 7298.33 69587.32 00:09:46.497 0 00:09:46.497 18:18:44 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72773 00:09:46.497 18:18:44 -- common/autotest_common.sh@936 -- # '[' -z 72773 ']' 00:09:46.497 18:18:44 -- common/autotest_common.sh@940 -- # kill -0 72773 00:09:46.497 18:18:44 -- common/autotest_common.sh@941 -- # uname 00:09:46.497 18:18:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:46.497 18:18:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72773 00:09:46.497 killing process with pid 72773 00:09:46.497 Received shutdown signal, test time was about 10.000000 seconds 00:09:46.497 00:09:46.497 Latency(us) 00:09:46.497 [2024-11-17T18:18:44.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.497 [2024-11-17T18:18:44.764Z] =================================================================================================================== 00:09:46.497 [2024-11-17T18:18:44.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:46.497 18:18:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:46.497 18:18:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:46.497 18:18:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72773' 00:09:46.498 18:18:44 -- common/autotest_common.sh@955 
-- # kill 72773 00:09:46.498 18:18:44 -- common/autotest_common.sh@960 -- # wait 72773 00:09:46.757 18:18:44 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:47.016 18:18:45 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:47.016 18:18:45 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:47.276 18:18:45 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:47.276 18:18:45 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:09:47.276 18:18:45 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72418 00:09:47.276 18:18:45 -- target/nvmf_lvs_grow.sh@74 -- # wait 72418 00:09:47.276 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72418 Killed "${NVMF_APP[@]}" "$@" 00:09:47.276 18:18:45 -- target/nvmf_lvs_grow.sh@74 -- # true 00:09:47.276 18:18:45 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:09:47.276 18:18:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:47.276 18:18:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.276 18:18:45 -- common/autotest_common.sh@10 -- # set +x 00:09:47.276 18:18:45 -- nvmf/common.sh@469 -- # nvmfpid=72923 00:09:47.276 18:18:45 -- nvmf/common.sh@470 -- # waitforlisten 72923 00:09:47.276 18:18:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:47.276 18:18:45 -- common/autotest_common.sh@829 -- # '[' -z 72923 ']' 00:09:47.276 18:18:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.276 18:18:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.276 18:18:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.276 18:18:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.276 18:18:45 -- common/autotest_common.sh@10 -- # set +x 00:09:47.276 [2024-11-17 18:18:45.425862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:47.276 [2024-11-17 18:18:45.426211] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.535 [2024-11-17 18:18:45.561511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.535 [2024-11-17 18:18:45.595106] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:47.535 [2024-11-17 18:18:45.595508] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.535 [2024-11-17 18:18:45.595532] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.535 [2024-11-17 18:18:45.595542] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
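[editor's note] What makes the dirty variant dirty: after the grow, the bdevperf job is stopped, the subsystem is deleted, and the original nvmf_tgt (pid 72418 here) is killed with SIGKILL so the grown lvstore is never cleanly unloaded. A fresh target is then started in the same namespace, and when the AIO bdev is re-created on the same backing file the blobstore has to replay/recover its metadata; afterwards the lvol must reappear and the cluster counts must still read 61 free out of 99 total. A rough sketch, reusing the placeholder variables from the earlier outline ($aio_file, $lvs, $lvol; $nvmfpid stands for the first target's pid):

    kill -9 "$nvmfpid"                                  # hard kill: lvstore metadata left dirty on disk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &

    rpc.py bdev_aio_create "$aio_file" aio_bdev 4096    # attaching the same file triggers blobstore recovery
    rpc.py bdev_wait_for_examine
    rpc.py bdev_get_bdevs -b "$lvol" -t 2000            # lvol must come back after recovery
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | \
        jq -r '.[0].free_clusters, .[0].total_data_clusters'   # expected: 61 and 99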
00:09:47.535 [2024-11-17 18:18:45.595575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.473 18:18:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.473 18:18:46 -- common/autotest_common.sh@862 -- # return 0 00:09:48.473 18:18:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:48.473 18:18:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.473 18:18:46 -- common/autotest_common.sh@10 -- # set +x 00:09:48.473 18:18:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.473 18:18:46 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:48.473 [2024-11-17 18:18:46.666950] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:48.473 [2024-11-17 18:18:46.667205] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:48.473 [2024-11-17 18:18:46.667483] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:48.473 18:18:46 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:09:48.473 18:18:46 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99 00:09:48.473 18:18:46 -- common/autotest_common.sh@897 -- # local bdev_name=c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99 00:09:48.473 18:18:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:48.473 18:18:46 -- common/autotest_common.sh@899 -- # local i 00:09:48.473 18:18:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:48.473 18:18:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:48.473 18:18:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:48.731 18:18:46 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99 -t 2000 00:09:48.990 [ 00:09:48.990 { 00:09:48.990 "name": "c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99", 00:09:48.990 "aliases": [ 00:09:48.990 "lvs/lvol" 00:09:48.990 ], 00:09:48.990 "product_name": "Logical Volume", 00:09:48.990 "block_size": 4096, 00:09:48.990 "num_blocks": 38912, 00:09:48.990 "uuid": "c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99", 00:09:48.990 "assigned_rate_limits": { 00:09:48.990 "rw_ios_per_sec": 0, 00:09:48.990 "rw_mbytes_per_sec": 0, 00:09:48.990 "r_mbytes_per_sec": 0, 00:09:48.990 "w_mbytes_per_sec": 0 00:09:48.990 }, 00:09:48.990 "claimed": false, 00:09:48.990 "zoned": false, 00:09:48.990 "supported_io_types": { 00:09:48.990 "read": true, 00:09:48.990 "write": true, 00:09:48.990 "unmap": true, 00:09:48.990 "write_zeroes": true, 00:09:48.990 "flush": false, 00:09:48.990 "reset": true, 00:09:48.990 "compare": false, 00:09:48.990 "compare_and_write": false, 00:09:48.990 "abort": false, 00:09:48.990 "nvme_admin": false, 00:09:48.990 "nvme_io": false 00:09:48.990 }, 00:09:48.990 "driver_specific": { 00:09:48.990 "lvol": { 00:09:48.990 "lvol_store_uuid": "b2c54d31-362b-425d-ba51-6972b776d41e", 00:09:48.990 "base_bdev": "aio_bdev", 00:09:48.990 "thin_provision": false, 00:09:48.990 "snapshot": false, 00:09:48.990 "clone": false, 00:09:48.990 "esnap_clone": false 00:09:48.990 } 00:09:48.990 } 00:09:48.990 } 00:09:48.990 ] 00:09:48.990 18:18:47 -- common/autotest_common.sh@905 -- # return 0 00:09:48.990 18:18:47 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b2c54d31-362b-425d-ba51-6972b776d41e 00:09:48.990 18:18:47 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:09:49.250 18:18:47 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:09:49.250 18:18:47 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:49.250 18:18:47 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:09:49.819 18:18:47 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:09:49.819 18:18:47 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:49.819 [2024-11-17 18:18:47.968555] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:49.819 18:18:48 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:49.819 18:18:48 -- common/autotest_common.sh@650 -- # local es=0 00:09:49.819 18:18:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:49.819 18:18:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.819 18:18:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.819 18:18:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.819 18:18:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.819 18:18:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.819 18:18:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.819 18:18:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.819 18:18:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:49.819 18:18:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:50.078 request: 00:09:50.078 { 00:09:50.078 "uuid": "b2c54d31-362b-425d-ba51-6972b776d41e", 00:09:50.078 "method": "bdev_lvol_get_lvstores", 00:09:50.078 "req_id": 1 00:09:50.078 } 00:09:50.078 Got JSON-RPC error response 00:09:50.078 response: 00:09:50.078 { 00:09:50.078 "code": -19, 00:09:50.078 "message": "No such device" 00:09:50.078 } 00:09:50.078 18:18:48 -- common/autotest_common.sh@653 -- # es=1 00:09:50.078 18:18:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:50.078 18:18:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:50.078 18:18:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:50.078 18:18:48 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:50.337 aio_bdev 00:09:50.337 18:18:48 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99 00:09:50.337 18:18:48 -- common/autotest_common.sh@897 -- # local bdev_name=c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99 00:09:50.337 18:18:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:50.337 18:18:48 -- common/autotest_common.sh@899 -- # local i 00:09:50.337 18:18:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:50.337 18:18:48 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:50.337 18:18:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:50.597 18:18:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99 -t 2000 00:09:50.597 [ 00:09:50.597 { 00:09:50.597 "name": "c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99", 00:09:50.597 "aliases": [ 00:09:50.597 "lvs/lvol" 00:09:50.597 ], 00:09:50.597 "product_name": "Logical Volume", 00:09:50.597 "block_size": 4096, 00:09:50.597 "num_blocks": 38912, 00:09:50.597 "uuid": "c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99", 00:09:50.597 "assigned_rate_limits": { 00:09:50.597 "rw_ios_per_sec": 0, 00:09:50.597 "rw_mbytes_per_sec": 0, 00:09:50.597 "r_mbytes_per_sec": 0, 00:09:50.597 "w_mbytes_per_sec": 0 00:09:50.597 }, 00:09:50.597 "claimed": false, 00:09:50.597 "zoned": false, 00:09:50.597 "supported_io_types": { 00:09:50.597 "read": true, 00:09:50.597 "write": true, 00:09:50.597 "unmap": true, 00:09:50.597 "write_zeroes": true, 00:09:50.597 "flush": false, 00:09:50.597 "reset": true, 00:09:50.597 "compare": false, 00:09:50.597 "compare_and_write": false, 00:09:50.597 "abort": false, 00:09:50.597 "nvme_admin": false, 00:09:50.597 "nvme_io": false 00:09:50.597 }, 00:09:50.597 "driver_specific": { 00:09:50.597 "lvol": { 00:09:50.597 "lvol_store_uuid": "b2c54d31-362b-425d-ba51-6972b776d41e", 00:09:50.597 "base_bdev": "aio_bdev", 00:09:50.597 "thin_provision": false, 00:09:50.597 "snapshot": false, 00:09:50.597 "clone": false, 00:09:50.597 "esnap_clone": false 00:09:50.597 } 00:09:50.597 } 00:09:50.597 } 00:09:50.597 ] 00:09:50.597 18:18:48 -- common/autotest_common.sh@905 -- # return 0 00:09:50.597 18:18:48 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:50.597 18:18:48 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:50.856 18:18:49 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:50.856 18:18:49 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:50.856 18:18:49 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:51.129 18:18:49 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:51.129 18:18:49 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c31f7dfc-2cce-4c39-8ba2-0d06d44a7a99 00:09:51.389 18:18:49 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2c54d31-362b-425d-ba51-6972b776d41e 00:09:51.648 18:18:49 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:51.910 18:18:50 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:52.512 ************************************ 00:09:52.512 END TEST lvs_grow_dirty 00:09:52.512 ************************************ 00:09:52.512 00:09:52.512 real 0m19.892s 00:09:52.512 user 0m40.503s 00:09:52.512 sys 0m9.087s 00:09:52.512 18:18:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.512 18:18:50 -- common/autotest_common.sh@10 -- # set +x 00:09:52.512 18:18:50 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:52.512 18:18:50 -- common/autotest_common.sh@806 -- # type=--id 00:09:52.512 18:18:50 -- 
common/autotest_common.sh@807 -- # id=0 00:09:52.512 18:18:50 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:52.512 18:18:50 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:52.512 18:18:50 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:52.512 18:18:50 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:52.512 18:18:50 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:52.512 18:18:50 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:52.512 nvmf_trace.0 00:09:52.512 18:18:50 -- common/autotest_common.sh@821 -- # return 0 00:09:52.512 18:18:50 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:52.512 18:18:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:52.512 18:18:50 -- nvmf/common.sh@116 -- # sync 00:09:52.512 18:18:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:52.512 18:18:50 -- nvmf/common.sh@119 -- # set +e 00:09:52.512 18:18:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:52.512 18:18:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:52.512 rmmod nvme_tcp 00:09:52.512 rmmod nvme_fabrics 00:09:52.512 rmmod nvme_keyring 00:09:52.512 18:18:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:52.512 18:18:50 -- nvmf/common.sh@123 -- # set -e 00:09:52.512 18:18:50 -- nvmf/common.sh@124 -- # return 0 00:09:52.512 18:18:50 -- nvmf/common.sh@477 -- # '[' -n 72923 ']' 00:09:52.512 18:18:50 -- nvmf/common.sh@478 -- # killprocess 72923 00:09:52.512 18:18:50 -- common/autotest_common.sh@936 -- # '[' -z 72923 ']' 00:09:52.512 18:18:50 -- common/autotest_common.sh@940 -- # kill -0 72923 00:09:52.512 18:18:50 -- common/autotest_common.sh@941 -- # uname 00:09:52.512 18:18:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:52.512 18:18:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72923 00:09:52.770 killing process with pid 72923 00:09:52.770 18:18:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:52.770 18:18:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:52.770 18:18:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72923' 00:09:52.770 18:18:50 -- common/autotest_common.sh@955 -- # kill 72923 00:09:52.770 18:18:50 -- common/autotest_common.sh@960 -- # wait 72923 00:09:52.770 18:18:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:52.770 18:18:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:52.770 18:18:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:52.771 18:18:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.771 18:18:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:52.771 18:18:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.771 18:18:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.771 18:18:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.771 18:18:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:52.771 ************************************ 00:09:52.771 END TEST nvmf_lvs_grow 00:09:52.771 ************************************ 00:09:52.771 00:09:52.771 real 0m40.559s 00:09:52.771 user 1m4.184s 00:09:52.771 sys 0m12.109s 00:09:52.771 18:18:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.771 18:18:50 -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 18:18:51 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:52.771 18:18:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:52.771 18:18:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.771 18:18:51 -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 ************************************ 00:09:52.771 START TEST nvmf_bdev_io_wait 00:09:52.771 ************************************ 00:09:52.771 18:18:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:53.030 * Looking for test storage... 00:09:53.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.030 18:18:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:53.030 18:18:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:53.030 18:18:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:53.030 18:18:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:53.030 18:18:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:53.030 18:18:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:53.030 18:18:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:53.030 18:18:51 -- scripts/common.sh@335 -- # IFS=.-: 00:09:53.030 18:18:51 -- scripts/common.sh@335 -- # read -ra ver1 00:09:53.030 18:18:51 -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.030 18:18:51 -- scripts/common.sh@336 -- # read -ra ver2 00:09:53.030 18:18:51 -- scripts/common.sh@337 -- # local 'op=<' 00:09:53.030 18:18:51 -- scripts/common.sh@339 -- # ver1_l=2 00:09:53.030 18:18:51 -- scripts/common.sh@340 -- # ver2_l=1 00:09:53.030 18:18:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:53.030 18:18:51 -- scripts/common.sh@343 -- # case "$op" in 00:09:53.030 18:18:51 -- scripts/common.sh@344 -- # : 1 00:09:53.030 18:18:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:53.030 18:18:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.030 18:18:51 -- scripts/common.sh@364 -- # decimal 1 00:09:53.030 18:18:51 -- scripts/common.sh@352 -- # local d=1 00:09:53.030 18:18:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.030 18:18:51 -- scripts/common.sh@354 -- # echo 1 00:09:53.030 18:18:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:53.030 18:18:51 -- scripts/common.sh@365 -- # decimal 2 00:09:53.030 18:18:51 -- scripts/common.sh@352 -- # local d=2 00:09:53.030 18:18:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.030 18:18:51 -- scripts/common.sh@354 -- # echo 2 00:09:53.030 18:18:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:53.030 18:18:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:53.030 18:18:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:53.030 18:18:51 -- scripts/common.sh@367 -- # return 0 00:09:53.030 18:18:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.030 18:18:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:53.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.030 --rc genhtml_branch_coverage=1 00:09:53.030 --rc genhtml_function_coverage=1 00:09:53.030 --rc genhtml_legend=1 00:09:53.030 --rc geninfo_all_blocks=1 00:09:53.030 --rc geninfo_unexecuted_blocks=1 00:09:53.030 00:09:53.030 ' 00:09:53.030 18:18:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:53.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.030 --rc genhtml_branch_coverage=1 00:09:53.030 --rc genhtml_function_coverage=1 00:09:53.030 --rc genhtml_legend=1 00:09:53.030 --rc geninfo_all_blocks=1 00:09:53.030 --rc geninfo_unexecuted_blocks=1 00:09:53.030 00:09:53.030 ' 00:09:53.030 18:18:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:53.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.030 --rc genhtml_branch_coverage=1 00:09:53.030 --rc genhtml_function_coverage=1 00:09:53.030 --rc genhtml_legend=1 00:09:53.030 --rc geninfo_all_blocks=1 00:09:53.030 --rc geninfo_unexecuted_blocks=1 00:09:53.030 00:09:53.030 ' 00:09:53.030 18:18:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:53.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.030 --rc genhtml_branch_coverage=1 00:09:53.030 --rc genhtml_function_coverage=1 00:09:53.030 --rc genhtml_legend=1 00:09:53.030 --rc geninfo_all_blocks=1 00:09:53.030 --rc geninfo_unexecuted_blocks=1 00:09:53.030 00:09:53.030 ' 00:09:53.030 18:18:51 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.030 18:18:51 -- nvmf/common.sh@7 -- # uname -s 00:09:53.030 18:18:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.030 18:18:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.030 18:18:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.030 18:18:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.030 18:18:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.030 18:18:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.030 18:18:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.030 18:18:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.030 18:18:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.030 18:18:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.030 18:18:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 
00:09:53.030 18:18:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:09:53.030 18:18:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.030 18:18:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.030 18:18:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.030 18:18:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.030 18:18:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.030 18:18:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.030 18:18:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.030 18:18:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.030 18:18:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.030 18:18:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.030 18:18:51 -- paths/export.sh@5 -- # export PATH 00:09:53.030 18:18:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.030 18:18:51 -- nvmf/common.sh@46 -- # : 0 00:09:53.030 18:18:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:53.030 18:18:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:53.030 18:18:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:53.030 18:18:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.030 18:18:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.030 18:18:51 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:09:53.030 18:18:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:53.030 18:18:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:53.030 18:18:51 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.030 18:18:51 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.030 18:18:51 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:53.030 18:18:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:53.030 18:18:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.030 18:18:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:53.030 18:18:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:53.030 18:18:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:53.030 18:18:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.030 18:18:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.030 18:18:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.030 18:18:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:53.030 18:18:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:53.030 18:18:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:53.030 18:18:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:53.030 18:18:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:53.030 18:18:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:53.030 18:18:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.030 18:18:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.030 18:18:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:53.030 18:18:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:53.030 18:18:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.030 18:18:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.030 18:18:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.030 18:18:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.030 18:18:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.030 18:18:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.030 18:18:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.030 18:18:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.030 18:18:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:53.030 18:18:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:53.030 Cannot find device "nvmf_tgt_br" 00:09:53.030 18:18:51 -- nvmf/common.sh@154 -- # true 00:09:53.030 18:18:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.030 Cannot find device "nvmf_tgt_br2" 00:09:53.030 18:18:51 -- nvmf/common.sh@155 -- # true 00:09:53.030 18:18:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:53.030 18:18:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:53.030 Cannot find device "nvmf_tgt_br" 00:09:53.030 18:18:51 -- nvmf/common.sh@157 -- # true 00:09:53.030 18:18:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:53.030 Cannot find device "nvmf_tgt_br2" 00:09:53.030 18:18:51 -- nvmf/common.sh@158 -- # true 00:09:53.030 18:18:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:53.289 18:18:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:53.289 18:18:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.289 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.289 18:18:51 -- nvmf/common.sh@161 -- # true 00:09:53.289 18:18:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.289 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.289 18:18:51 -- nvmf/common.sh@162 -- # true 00:09:53.289 18:18:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.289 18:18:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:53.289 18:18:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.289 18:18:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.289 18:18:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.289 18:18:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.290 18:18:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.290 18:18:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:53.290 18:18:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:53.290 18:18:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:53.290 18:18:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:53.290 18:18:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:53.290 18:18:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:53.290 18:18:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.290 18:18:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:53.290 18:18:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.290 18:18:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:53.290 18:18:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:53.290 18:18:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.290 18:18:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.290 18:18:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:53.290 18:18:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.290 18:18:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:53.290 18:18:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:53.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:09:53.290 00:09:53.290 --- 10.0.0.2 ping statistics --- 00:09:53.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.290 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:53.290 18:18:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:53.290 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:53.290 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:09:53.290 00:09:53.290 --- 10.0.0.3 ping statistics --- 00:09:53.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.290 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:53.290 18:18:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:53.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:53.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:53.290 00:09:53.290 --- 10.0.0.1 ping statistics --- 00:09:53.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.290 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:53.290 18:18:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.290 18:18:51 -- nvmf/common.sh@421 -- # return 0 00:09:53.290 18:18:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:53.290 18:18:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.290 18:18:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:53.290 18:18:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:53.290 18:18:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.290 18:18:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:53.290 18:18:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:53.290 18:18:51 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:53.290 18:18:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:53.290 18:18:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:53.290 18:18:51 -- common/autotest_common.sh@10 -- # set +x 00:09:53.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.290 18:18:51 -- nvmf/common.sh@469 -- # nvmfpid=73244 00:09:53.290 18:18:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:53.290 18:18:51 -- nvmf/common.sh@470 -- # waitforlisten 73244 00:09:53.290 18:18:51 -- common/autotest_common.sh@829 -- # '[' -z 73244 ']' 00:09:53.290 18:18:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.290 18:18:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.290 18:18:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.290 18:18:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.290 18:18:51 -- common/autotest_common.sh@10 -- # set +x 00:09:53.549 [2024-11-17 18:18:51.599043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:53.549 [2024-11-17 18:18:51.599133] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.549 [2024-11-17 18:18:51.739910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.549 [2024-11-17 18:18:51.784342] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:53.549 [2024-11-17 18:18:51.784696] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.549 [2024-11-17 18:18:51.784888] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.549 [2024-11-17 18:18:51.785047] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
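For readers reconstructing the fixture: nvmf_veth_init builds the small virtual topology that the pings above just verified, namely a network namespace for the target, veth pairs bridged on the host, and the 10.0.0.0/24 addresses. A condensed standalone sketch of that setup (interface names and addresses taken from the log; run as root; this is an illustration, not the literal helper from nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator-side reachability check, as in the log

The second target interface (nvmf_tgt_if2 with 10.0.0.3) follows the same pattern and is omitted here for brevity.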
00:09:53.549 [2024-11-17 18:18:51.785324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.549 [2024-11-17 18:18:51.785456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.549 [2024-11-17 18:18:51.785539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.549 [2024-11-17 18:18:51.785539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.808 18:18:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.808 18:18:51 -- common/autotest_common.sh@862 -- # return 0 00:09:53.808 18:18:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:53.808 18:18:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:53.808 18:18:51 -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 18:18:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.808 18:18:51 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:53.808 18:18:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.808 18:18:51 -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 18:18:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.808 18:18:51 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:53.808 18:18:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.808 18:18:51 -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 18:18:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.808 18:18:51 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.808 18:18:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.808 18:18:51 -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 [2024-11-17 18:18:51.952530] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.808 18:18:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.808 18:18:51 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.808 18:18:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.808 18:18:51 -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 Malloc0 00:09:53.808 18:18:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.808 18:18:51 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.808 18:18:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.808 18:18:51 -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 18:18:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.808 18:18:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.808 18:18:52 -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 18:18:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.808 18:18:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.808 18:18:52 -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 [2024-11-17 18:18:52.013418] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.808 18:18:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73272 00:09:53.808 18:18:52 
-- target/bdev_io_wait.sh@30 -- # READ_PID=73274 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73276 00:09:53.808 18:18:52 -- nvmf/common.sh@520 -- # config=() 00:09:53.808 18:18:52 -- nvmf/common.sh@520 -- # local subsystem config 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:53.808 18:18:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:53.808 18:18:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:53.808 { 00:09:53.808 "params": { 00:09:53.808 "name": "Nvme$subsystem", 00:09:53.808 "trtype": "$TEST_TRANSPORT", 00:09:53.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.808 "adrfam": "ipv4", 00:09:53.808 "trsvcid": "$NVMF_PORT", 00:09:53.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.808 "hdgst": ${hdgst:-false}, 00:09:53.808 "ddgst": ${ddgst:-false} 00:09:53.808 }, 00:09:53.808 "method": "bdev_nvme_attach_controller" 00:09:53.808 } 00:09:53.808 EOF 00:09:53.808 )") 00:09:53.808 18:18:52 -- nvmf/common.sh@520 -- # config=() 00:09:53.808 18:18:52 -- nvmf/common.sh@520 -- # local subsystem config 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:53.808 18:18:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:53.808 18:18:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:53.808 { 00:09:53.808 "params": { 00:09:53.808 "name": "Nvme$subsystem", 00:09:53.808 "trtype": "$TEST_TRANSPORT", 00:09:53.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.808 "adrfam": "ipv4", 00:09:53.808 "trsvcid": "$NVMF_PORT", 00:09:53.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.808 "hdgst": ${hdgst:-false}, 00:09:53.808 "ddgst": ${ddgst:-false} 00:09:53.808 }, 00:09:53.808 "method": "bdev_nvme_attach_controller" 00:09:53.808 } 00:09:53.808 EOF 00:09:53.808 )") 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:53.808 18:18:52 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:53.808 18:18:52 -- nvmf/common.sh@542 -- # cat 00:09:53.808 18:18:52 -- nvmf/common.sh@520 -- # config=() 00:09:53.808 18:18:52 -- nvmf/common.sh@520 -- # local subsystem config 00:09:53.808 18:18:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:53.808 18:18:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:53.808 { 00:09:53.808 "params": { 00:09:53.808 "name": "Nvme$subsystem", 00:09:53.808 "trtype": "$TEST_TRANSPORT", 00:09:53.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.808 "adrfam": "ipv4", 00:09:53.808 "trsvcid": "$NVMF_PORT", 00:09:53.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.808 "hdgst": ${hdgst:-false}, 00:09:53.808 
"ddgst": ${ddgst:-false} 00:09:53.808 }, 00:09:53.808 "method": "bdev_nvme_attach_controller" 00:09:53.808 } 00:09:53.808 EOF 00:09:53.808 )") 00:09:53.808 18:18:52 -- nvmf/common.sh@542 -- # cat 00:09:53.808 18:18:52 -- nvmf/common.sh@520 -- # config=() 00:09:53.808 18:18:52 -- nvmf/common.sh@520 -- # local subsystem config 00:09:53.808 18:18:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:53.808 18:18:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:53.808 { 00:09:53.808 "params": { 00:09:53.808 "name": "Nvme$subsystem", 00:09:53.808 "trtype": "$TEST_TRANSPORT", 00:09:53.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.808 "adrfam": "ipv4", 00:09:53.808 "trsvcid": "$NVMF_PORT", 00:09:53.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.808 "hdgst": ${hdgst:-false}, 00:09:53.808 "ddgst": ${ddgst:-false} 00:09:53.808 }, 00:09:53.809 "method": "bdev_nvme_attach_controller" 00:09:53.809 } 00:09:53.809 EOF 00:09:53.809 )") 00:09:53.809 18:18:52 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73278 00:09:53.809 18:18:52 -- target/bdev_io_wait.sh@35 -- # sync 00:09:53.809 18:18:52 -- nvmf/common.sh@542 -- # cat 00:09:53.809 18:18:52 -- nvmf/common.sh@544 -- # jq . 00:09:53.809 18:18:52 -- nvmf/common.sh@544 -- # jq . 00:09:53.809 18:18:52 -- nvmf/common.sh@542 -- # cat 00:09:53.809 18:18:52 -- nvmf/common.sh@545 -- # IFS=, 00:09:53.809 18:18:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:53.809 "params": { 00:09:53.809 "name": "Nvme1", 00:09:53.809 "trtype": "tcp", 00:09:53.809 "traddr": "10.0.0.2", 00:09:53.809 "adrfam": "ipv4", 00:09:53.809 "trsvcid": "4420", 00:09:53.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.809 "hdgst": false, 00:09:53.809 "ddgst": false 00:09:53.809 }, 00:09:53.809 "method": "bdev_nvme_attach_controller" 00:09:53.809 }' 00:09:53.809 18:18:52 -- nvmf/common.sh@545 -- # IFS=, 00:09:53.809 18:18:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:53.809 "params": { 00:09:53.809 "name": "Nvme1", 00:09:53.809 "trtype": "tcp", 00:09:53.809 "traddr": "10.0.0.2", 00:09:53.809 "adrfam": "ipv4", 00:09:53.809 "trsvcid": "4420", 00:09:53.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.809 "hdgst": false, 00:09:53.809 "ddgst": false 00:09:53.809 }, 00:09:53.809 "method": "bdev_nvme_attach_controller" 00:09:53.809 }' 00:09:53.809 18:18:52 -- nvmf/common.sh@544 -- # jq . 00:09:53.809 18:18:52 -- nvmf/common.sh@545 -- # IFS=, 00:09:53.809 18:18:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:53.809 "params": { 00:09:53.809 "name": "Nvme1", 00:09:53.809 "trtype": "tcp", 00:09:53.809 "traddr": "10.0.0.2", 00:09:53.809 "adrfam": "ipv4", 00:09:53.809 "trsvcid": "4420", 00:09:53.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.809 "hdgst": false, 00:09:53.809 "ddgst": false 00:09:53.809 }, 00:09:53.809 "method": "bdev_nvme_attach_controller" 00:09:53.809 }' 00:09:53.809 [2024-11-17 18:18:52.060636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:53.809 18:18:52 -- nvmf/common.sh@544 -- # jq . 
00:09:53.809 [2024-11-17 18:18:52.060866] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:53.809 18:18:52 -- nvmf/common.sh@545 -- # IFS=, 00:09:53.809 18:18:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:53.809 "params": { 00:09:53.809 "name": "Nvme1", 00:09:53.809 "trtype": "tcp", 00:09:53.809 "traddr": "10.0.0.2", 00:09:53.809 "adrfam": "ipv4", 00:09:53.809 "trsvcid": "4420", 00:09:53.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.809 "hdgst": false, 00:09:53.809 "ddgst": false 00:09:53.809 }, 00:09:53.809 "method": "bdev_nvme_attach_controller" 00:09:53.809 }' 00:09:54.067 [2024-11-17 18:18:52.074302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:54.067 [2024-11-17 18:18:52.074595] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:54.067 18:18:52 -- target/bdev_io_wait.sh@37 -- # wait 73272 00:09:54.067 [2024-11-17 18:18:52.097238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:54.067 [2024-11-17 18:18:52.097840] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:54.067 [2024-11-17 18:18:52.098922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:54.068 [2024-11-17 18:18:52.099145] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:54.068 [2024-11-17 18:18:52.252355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.068 [2024-11-17 18:18:52.276788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:54.068 [2024-11-17 18:18:52.291181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.068 [2024-11-17 18:18:52.313062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:54.326 [2024-11-17 18:18:52.338380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.326 [2024-11-17 18:18:52.362303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:54.326 [2024-11-17 18:18:52.379778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.326 Running I/O for 1 seconds... 00:09:54.326 [2024-11-17 18:18:52.404847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:09:54.326 Running I/O for 1 seconds... 00:09:54.326 Running I/O for 1 seconds... 00:09:54.326 Running I/O for 1 seconds... 
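All four bdevperf instances above (write, read, flush and unmap on core masks 0x10, 0x20, 0x40 and 0x80) receive the same generated configuration over /dev/fd/63: a single bdev_nvme_attach_controller entry pointing at the listener created earlier. The per-controller entry that the log prints, reformatted here for readability (values copied verbatim from the log; any enclosing document structure added by gen_nvmf_target_json is omitted):

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }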
00:09:55.259 00:09:55.259 Latency(us) 00:09:55.259 [2024-11-17T18:18:53.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.259 [2024-11-17T18:18:53.526Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:55.259 Nvme1n1 : 1.02 6362.81 24.85 0.00 0.00 20009.23 8221.79 36938.47 00:09:55.259 [2024-11-17T18:18:53.526Z] =================================================================================================================== 00:09:55.259 [2024-11-17T18:18:53.526Z] Total : 6362.81 24.85 0.00 0.00 20009.23 8221.79 36938.47 00:09:55.259 00:09:55.259 Latency(us) 00:09:55.259 [2024-11-17T18:18:53.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.259 [2024-11-17T18:18:53.526Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:55.259 Nvme1n1 : 1.00 170322.17 665.32 0.00 0.00 748.90 338.85 1094.75 00:09:55.259 [2024-11-17T18:18:53.527Z] =================================================================================================================== 00:09:55.260 [2024-11-17T18:18:53.527Z] Total : 170322.17 665.32 0.00 0.00 748.90 338.85 1094.75 00:09:55.260 00:09:55.260 Latency(us) 00:09:55.260 [2024-11-17T18:18:53.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.260 [2024-11-17T18:18:53.527Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:55.260 Nvme1n1 : 1.01 8421.24 32.90 0.00 0.00 15115.13 10187.87 27763.43 00:09:55.260 [2024-11-17T18:18:53.527Z] =================================================================================================================== 00:09:55.260 [2024-11-17T18:18:53.527Z] Total : 8421.24 32.90 0.00 0.00 15115.13 10187.87 27763.43 00:09:55.518 00:09:55.518 Latency(us) 00:09:55.518 [2024-11-17T18:18:53.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.518 [2024-11-17T18:18:53.785Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:55.518 Nvme1n1 : 1.00 6343.66 24.78 0.00 0.00 20119.93 4944.99 48854.11 00:09:55.518 [2024-11-17T18:18:53.785Z] =================================================================================================================== 00:09:55.518 [2024-11-17T18:18:53.785Z] Total : 6343.66 24.78 0.00 0.00 20119.93 4944.99 48854.11 00:09:55.518 18:18:53 -- target/bdev_io_wait.sh@38 -- # wait 73274 00:09:55.518 18:18:53 -- target/bdev_io_wait.sh@39 -- # wait 73276 00:09:55.518 18:18:53 -- target/bdev_io_wait.sh@40 -- # wait 73278 00:09:55.518 18:18:53 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.518 18:18:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.518 18:18:53 -- common/autotest_common.sh@10 -- # set +x 00:09:55.518 18:18:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.518 18:18:53 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:55.518 18:18:53 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:55.518 18:18:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:55.518 18:18:53 -- nvmf/common.sh@116 -- # sync 00:09:55.518 18:18:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:55.518 18:18:53 -- nvmf/common.sh@119 -- # set +e 00:09:55.518 18:18:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:55.518 18:18:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:55.518 rmmod nvme_tcp 00:09:55.518 rmmod nvme_fabrics 00:09:55.518 rmmod nvme_keyring 00:09:55.518 18:18:53 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:55.518 18:18:53 -- nvmf/common.sh@123 -- # set -e 00:09:55.518 18:18:53 -- nvmf/common.sh@124 -- # return 0 00:09:55.518 18:18:53 -- nvmf/common.sh@477 -- # '[' -n 73244 ']' 00:09:55.518 18:18:53 -- nvmf/common.sh@478 -- # killprocess 73244 00:09:55.518 18:18:53 -- common/autotest_common.sh@936 -- # '[' -z 73244 ']' 00:09:55.518 18:18:53 -- common/autotest_common.sh@940 -- # kill -0 73244 00:09:55.518 18:18:53 -- common/autotest_common.sh@941 -- # uname 00:09:55.518 18:18:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:55.776 18:18:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73244 00:09:55.776 killing process with pid 73244 00:09:55.776 18:18:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:55.776 18:18:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:55.776 18:18:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73244' 00:09:55.776 18:18:53 -- common/autotest_common.sh@955 -- # kill 73244 00:09:55.776 18:18:53 -- common/autotest_common.sh@960 -- # wait 73244 00:09:55.776 18:18:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:55.776 18:18:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:55.776 18:18:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:55.776 18:18:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.776 18:18:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:55.776 18:18:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.776 18:18:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.776 18:18:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.776 18:18:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:55.776 ************************************ 00:09:55.776 END TEST nvmf_bdev_io_wait 00:09:55.776 ************************************ 00:09:55.776 00:09:55.776 real 0m2.957s 00:09:55.776 user 0m12.750s 00:09:55.776 sys 0m1.936s 00:09:55.776 18:18:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:55.776 18:18:53 -- common/autotest_common.sh@10 -- # set +x 00:09:55.776 18:18:54 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:55.776 18:18:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:55.776 18:18:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.776 18:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:55.776 ************************************ 00:09:55.776 START TEST nvmf_queue_depth 00:09:55.776 ************************************ 00:09:55.776 18:18:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:56.035 * Looking for test storage... 
00:09:56.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.035 18:18:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:56.036 18:18:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:56.036 18:18:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:56.036 18:18:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:56.036 18:18:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:56.036 18:18:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:56.036 18:18:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:56.036 18:18:54 -- scripts/common.sh@335 -- # IFS=.-: 00:09:56.036 18:18:54 -- scripts/common.sh@335 -- # read -ra ver1 00:09:56.036 18:18:54 -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.036 18:18:54 -- scripts/common.sh@336 -- # read -ra ver2 00:09:56.036 18:18:54 -- scripts/common.sh@337 -- # local 'op=<' 00:09:56.036 18:18:54 -- scripts/common.sh@339 -- # ver1_l=2 00:09:56.036 18:18:54 -- scripts/common.sh@340 -- # ver2_l=1 00:09:56.036 18:18:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:56.036 18:18:54 -- scripts/common.sh@343 -- # case "$op" in 00:09:56.036 18:18:54 -- scripts/common.sh@344 -- # : 1 00:09:56.036 18:18:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:56.036 18:18:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:56.036 18:18:54 -- scripts/common.sh@364 -- # decimal 1 00:09:56.036 18:18:54 -- scripts/common.sh@352 -- # local d=1 00:09:56.036 18:18:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.036 18:18:54 -- scripts/common.sh@354 -- # echo 1 00:09:56.036 18:18:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:56.036 18:18:54 -- scripts/common.sh@365 -- # decimal 2 00:09:56.036 18:18:54 -- scripts/common.sh@352 -- # local d=2 00:09:56.036 18:18:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.036 18:18:54 -- scripts/common.sh@354 -- # echo 2 00:09:56.036 18:18:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:56.036 18:18:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:56.036 18:18:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:56.036 18:18:54 -- scripts/common.sh@367 -- # return 0 00:09:56.036 18:18:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.036 18:18:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:56.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.036 --rc genhtml_branch_coverage=1 00:09:56.036 --rc genhtml_function_coverage=1 00:09:56.036 --rc genhtml_legend=1 00:09:56.036 --rc geninfo_all_blocks=1 00:09:56.036 --rc geninfo_unexecuted_blocks=1 00:09:56.036 00:09:56.036 ' 00:09:56.036 18:18:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:56.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.036 --rc genhtml_branch_coverage=1 00:09:56.036 --rc genhtml_function_coverage=1 00:09:56.036 --rc genhtml_legend=1 00:09:56.036 --rc geninfo_all_blocks=1 00:09:56.036 --rc geninfo_unexecuted_blocks=1 00:09:56.036 00:09:56.036 ' 00:09:56.036 18:18:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:56.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.036 --rc genhtml_branch_coverage=1 00:09:56.036 --rc genhtml_function_coverage=1 00:09:56.036 --rc genhtml_legend=1 00:09:56.036 --rc geninfo_all_blocks=1 00:09:56.036 --rc geninfo_unexecuted_blocks=1 00:09:56.036 00:09:56.036 ' 00:09:56.036 
18:18:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:56.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.036 --rc genhtml_branch_coverage=1 00:09:56.036 --rc genhtml_function_coverage=1 00:09:56.036 --rc genhtml_legend=1 00:09:56.036 --rc geninfo_all_blocks=1 00:09:56.036 --rc geninfo_unexecuted_blocks=1 00:09:56.036 00:09:56.036 ' 00:09:56.036 18:18:54 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.036 18:18:54 -- nvmf/common.sh@7 -- # uname -s 00:09:56.036 18:18:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.036 18:18:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.036 18:18:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.036 18:18:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.036 18:18:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.036 18:18:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.036 18:18:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.036 18:18:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.036 18:18:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.036 18:18:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.036 18:18:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:09:56.036 18:18:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:09:56.036 18:18:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.036 18:18:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.036 18:18:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.036 18:18:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.036 18:18:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.036 18:18:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.036 18:18:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.036 18:18:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.036 18:18:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.036 18:18:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.036 18:18:54 -- paths/export.sh@5 -- # export PATH 00:09:56.036 18:18:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.036 18:18:54 -- nvmf/common.sh@46 -- # : 0 00:09:56.036 18:18:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:56.036 18:18:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:56.036 18:18:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:56.036 18:18:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.036 18:18:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.036 18:18:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:56.036 18:18:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:56.036 18:18:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:56.036 18:18:54 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:56.036 18:18:54 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:56.036 18:18:54 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:56.036 18:18:54 -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:56.036 18:18:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:56.036 18:18:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.036 18:18:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:56.036 18:18:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:56.036 18:18:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:56.036 18:18:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.036 18:18:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.036 18:18:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.036 18:18:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:56.036 18:18:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:56.036 18:18:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:56.036 18:18:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:56.036 18:18:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:56.036 18:18:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:56.036 18:18:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.036 18:18:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.036 18:18:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:56.036 18:18:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:56.036 18:18:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:56.036 18:18:54 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:56.036 18:18:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:56.036 18:18:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.036 18:18:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:56.036 18:18:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:56.036 18:18:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:56.036 18:18:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:56.036 18:18:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:56.036 18:18:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:56.036 Cannot find device "nvmf_tgt_br" 00:09:56.036 18:18:54 -- nvmf/common.sh@154 -- # true 00:09:56.036 18:18:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.036 Cannot find device "nvmf_tgt_br2" 00:09:56.036 18:18:54 -- nvmf/common.sh@155 -- # true 00:09:56.036 18:18:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:56.036 18:18:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:56.036 Cannot find device "nvmf_tgt_br" 00:09:56.036 18:18:54 -- nvmf/common.sh@157 -- # true 00:09:56.036 18:18:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:56.037 Cannot find device "nvmf_tgt_br2" 00:09:56.295 18:18:54 -- nvmf/common.sh@158 -- # true 00:09:56.295 18:18:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:56.295 18:18:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:56.295 18:18:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.295 18:18:54 -- nvmf/common.sh@161 -- # true 00:09:56.295 18:18:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.295 18:18:54 -- nvmf/common.sh@162 -- # true 00:09:56.295 18:18:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:56.295 18:18:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:56.295 18:18:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:56.295 18:18:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:56.295 18:18:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:56.295 18:18:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:56.295 18:18:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:56.295 18:18:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:56.295 18:18:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:56.295 18:18:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:56.295 18:18:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:56.295 18:18:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:56.295 18:18:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:56.295 18:18:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:56.295 18:18:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:09:56.295 18:18:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:56.295 18:18:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:56.295 18:18:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:56.295 18:18:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:56.295 18:18:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:56.295 18:18:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:56.295 18:18:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:56.295 18:18:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:56.295 18:18:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:56.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:56.295 00:09:56.295 --- 10.0.0.2 ping statistics --- 00:09:56.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.295 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:56.295 18:18:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:56.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:56.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:56.295 00:09:56.295 --- 10.0.0.3 ping statistics --- 00:09:56.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.295 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:56.296 18:18:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:56.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:56.296 00:09:56.296 --- 10.0.0.1 ping statistics --- 00:09:56.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.296 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:56.296 18:18:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.296 18:18:54 -- nvmf/common.sh@421 -- # return 0 00:09:56.296 18:18:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:56.296 18:18:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.296 18:18:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:56.296 18:18:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:56.296 18:18:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.296 18:18:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:56.296 18:18:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:56.554 18:18:54 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:56.554 18:18:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:56.554 18:18:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:56.554 18:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:56.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
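waitforlisten (invoked by nvmfappstart) simply polls until the freshly started target answers on its RPC socket. A hand-rolled equivalent, shown only to illustrate the pattern (the rpc_get_methods probe and the retry bound are illustrative choices, not the literal helper from autotest_common.sh):

    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break    # target is up and serving JSON-RPC
        fi
        sleep 0.1
    done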
00:09:56.554 18:18:54 -- nvmf/common.sh@469 -- # nvmfpid=73490 00:09:56.554 18:18:54 -- nvmf/common.sh@470 -- # waitforlisten 73490 00:09:56.554 18:18:54 -- common/autotest_common.sh@829 -- # '[' -z 73490 ']' 00:09:56.554 18:18:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:56.554 18:18:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.554 18:18:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.554 18:18:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.554 18:18:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.554 18:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:56.554 [2024-11-17 18:18:54.624337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:56.554 [2024-11-17 18:18:54.624419] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.554 [2024-11-17 18:18:54.760683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.555 [2024-11-17 18:18:54.793696] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:56.555 [2024-11-17 18:18:54.793845] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.555 [2024-11-17 18:18:54.793858] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.555 [2024-11-17 18:18:54.793866] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
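Editor's note: nvmfappstart launches the target inside the namespace pinned to one core (-m 0x2) with all tracepoint groups enabled (-e 0xFFFF), then blocks until the app answers on its UNIX-domain RPC socket before any provisioning RPCs are sent. A minimal equivalent of that launch-and-wait step; the polling loop is a sketch, not the harness's waitforlisten implementation:

spdk=/home/vagrant/spdk_repo/spdk
# start nvmf_tgt inside the target namespace, single core, all trace groups on
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# poll the RPC socket until the target is ready to accept commands
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done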
00:09:56.555 [2024-11-17 18:18:54.793889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.813 18:18:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.813 18:18:54 -- common/autotest_common.sh@862 -- # return 0 00:09:56.813 18:18:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:56.813 18:18:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:56.813 18:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:56.813 18:18:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.813 18:18:54 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:56.813 18:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.813 18:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:56.813 [2024-11-17 18:18:54.922082] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.813 18:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.813 18:18:54 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:56.813 18:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.813 18:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:56.813 Malloc0 00:09:56.813 18:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.813 18:18:54 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:56.813 18:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.813 18:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:56.813 18:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.813 18:18:54 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.813 18:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.813 18:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:56.813 18:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.813 18:18:54 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.813 18:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.813 18:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:56.813 [2024-11-17 18:18:54.976055] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
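Editor's note: the rpc_cmd calls above provision the freshly started target: a TCP transport (the -o and -u 8192 options come straight from NVMF_TRANSPORT_OPTS and the test script), a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev on 10.0.0.2:4420. The same sequence issued directly with rpc.py, which is effectively what rpc_cmd wraps:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420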
00:09:56.813 18:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.813 18:18:54 -- target/queue_depth.sh@30 -- # bdevperf_pid=73515 00:09:56.813 18:18:54 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:56.813 18:18:54 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:56.813 18:18:54 -- target/queue_depth.sh@33 -- # waitforlisten 73515 /var/tmp/bdevperf.sock 00:09:56.813 18:18:54 -- common/autotest_common.sh@829 -- # '[' -z 73515 ']' 00:09:56.813 18:18:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:56.813 18:18:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.813 18:18:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:56.813 18:18:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.813 18:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:56.813 [2024-11-17 18:18:55.020212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:56.813 [2024-11-17 18:18:55.020662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73515 ] 00:09:57.072 [2024-11-17 18:18:55.152738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.072 [2024-11-17 18:18:55.186084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.072 18:18:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.072 18:18:55 -- common/autotest_common.sh@862 -- # return 0 00:09:57.072 18:18:55 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:57.072 18:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.072 18:18:55 -- common/autotest_common.sh@10 -- # set +x 00:09:57.330 NVMe0n1 00:09:57.330 18:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.330 18:18:55 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:57.330 Running I/O for 10 seconds... 
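Editor's note: this is the actual queue-depth exercise. bdevperf is started in RPC-wait mode (-z) on its own socket, told to attach an NVMe-oF controller for the subsystem just created, and then driven by bdevperf.py with 1024 outstanding 4 KiB verify I/Os for 10 seconds. Condensed from the trace:

spdk=/home/vagrant/spdk_repo/spdk
# bdevperf waits (-z) on its private RPC socket until a controller is attached
"$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
"$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# kick off the timed run; results are printed by bdevperf when it finishes
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests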
00:10:07.306 00:10:07.306 Latency(us) 00:10:07.306 [2024-11-17T18:19:05.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.306 [2024-11-17T18:19:05.573Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:07.306 Verification LBA range: start 0x0 length 0x4000 00:10:07.306 NVMe0n1 : 10.06 15108.13 59.02 0.00 0.00 67534.84 13583.83 56480.12 00:10:07.306 [2024-11-17T18:19:05.573Z] =================================================================================================================== 00:10:07.306 [2024-11-17T18:19:05.573Z] Total : 15108.13 59.02 0.00 0.00 67534.84 13583.83 56480.12 00:10:07.306 0 00:10:07.306 18:19:05 -- target/queue_depth.sh@39 -- # killprocess 73515 00:10:07.306 18:19:05 -- common/autotest_common.sh@936 -- # '[' -z 73515 ']' 00:10:07.306 18:19:05 -- common/autotest_common.sh@940 -- # kill -0 73515 00:10:07.306 18:19:05 -- common/autotest_common.sh@941 -- # uname 00:10:07.306 18:19:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:07.306 18:19:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73515 00:10:07.565 killing process with pid 73515 00:10:07.565 Received shutdown signal, test time was about 10.000000 seconds 00:10:07.565 00:10:07.565 Latency(us) 00:10:07.565 [2024-11-17T18:19:05.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.565 [2024-11-17T18:19:05.832Z] =================================================================================================================== 00:10:07.565 [2024-11-17T18:19:05.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:07.565 18:19:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:07.565 18:19:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:07.565 18:19:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73515' 00:10:07.565 18:19:05 -- common/autotest_common.sh@955 -- # kill 73515 00:10:07.565 18:19:05 -- common/autotest_common.sh@960 -- # wait 73515 00:10:07.565 18:19:05 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:07.565 18:19:05 -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:07.565 18:19:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:07.565 18:19:05 -- nvmf/common.sh@116 -- # sync 00:10:07.565 18:19:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:07.565 18:19:05 -- nvmf/common.sh@119 -- # set +e 00:10:07.565 18:19:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:07.565 18:19:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:07.565 rmmod nvme_tcp 00:10:07.565 rmmod nvme_fabrics 00:10:07.565 rmmod nvme_keyring 00:10:07.823 18:19:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:07.823 18:19:05 -- nvmf/common.sh@123 -- # set -e 00:10:07.823 18:19:05 -- nvmf/common.sh@124 -- # return 0 00:10:07.823 18:19:05 -- nvmf/common.sh@477 -- # '[' -n 73490 ']' 00:10:07.823 18:19:05 -- nvmf/common.sh@478 -- # killprocess 73490 00:10:07.823 18:19:05 -- common/autotest_common.sh@936 -- # '[' -z 73490 ']' 00:10:07.823 18:19:05 -- common/autotest_common.sh@940 -- # kill -0 73490 00:10:07.823 18:19:05 -- common/autotest_common.sh@941 -- # uname 00:10:07.823 18:19:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:07.823 18:19:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73490 00:10:07.823 killing process with pid 73490 00:10:07.823 18:19:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:07.823 18:19:05 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:07.823 18:19:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73490' 00:10:07.823 18:19:05 -- common/autotest_common.sh@955 -- # kill 73490 00:10:07.823 18:19:05 -- common/autotest_common.sh@960 -- # wait 73490 00:10:07.823 18:19:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:07.823 18:19:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:07.823 18:19:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:07.823 18:19:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.823 18:19:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:07.823 18:19:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.823 18:19:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.823 18:19:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.823 18:19:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:07.823 00:10:07.823 real 0m12.042s 00:10:07.823 user 0m20.979s 00:10:07.823 sys 0m1.855s 00:10:07.823 18:19:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:07.823 18:19:06 -- common/autotest_common.sh@10 -- # set +x 00:10:07.823 ************************************ 00:10:07.823 END TEST nvmf_queue_depth 00:10:07.823 ************************************ 00:10:08.082 18:19:06 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:08.082 18:19:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:08.082 18:19:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.082 18:19:06 -- common/autotest_common.sh@10 -- # set +x 00:10:08.082 ************************************ 00:10:08.082 START TEST nvmf_multipath 00:10:08.082 ************************************ 00:10:08.082 18:19:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:08.082 * Looking for test storage... 00:10:08.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:08.082 18:19:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:08.082 18:19:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:08.082 18:19:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:08.082 18:19:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:08.082 18:19:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:08.082 18:19:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:08.082 18:19:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:08.082 18:19:06 -- scripts/common.sh@335 -- # IFS=.-: 00:10:08.082 18:19:06 -- scripts/common.sh@335 -- # read -ra ver1 00:10:08.082 18:19:06 -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.082 18:19:06 -- scripts/common.sh@336 -- # read -ra ver2 00:10:08.082 18:19:06 -- scripts/common.sh@337 -- # local 'op=<' 00:10:08.082 18:19:06 -- scripts/common.sh@339 -- # ver1_l=2 00:10:08.082 18:19:06 -- scripts/common.sh@340 -- # ver2_l=1 00:10:08.082 18:19:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:08.082 18:19:06 -- scripts/common.sh@343 -- # case "$op" in 00:10:08.082 18:19:06 -- scripts/common.sh@344 -- # : 1 00:10:08.082 18:19:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:08.082 18:19:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.082 18:19:06 -- scripts/common.sh@364 -- # decimal 1 00:10:08.082 18:19:06 -- scripts/common.sh@352 -- # local d=1 00:10:08.082 18:19:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.082 18:19:06 -- scripts/common.sh@354 -- # echo 1 00:10:08.082 18:19:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:08.082 18:19:06 -- scripts/common.sh@365 -- # decimal 2 00:10:08.082 18:19:06 -- scripts/common.sh@352 -- # local d=2 00:10:08.082 18:19:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.082 18:19:06 -- scripts/common.sh@354 -- # echo 2 00:10:08.082 18:19:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:08.082 18:19:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:08.082 18:19:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:08.082 18:19:06 -- scripts/common.sh@367 -- # return 0 00:10:08.082 18:19:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.082 18:19:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:08.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.082 --rc genhtml_branch_coverage=1 00:10:08.082 --rc genhtml_function_coverage=1 00:10:08.082 --rc genhtml_legend=1 00:10:08.082 --rc geninfo_all_blocks=1 00:10:08.082 --rc geninfo_unexecuted_blocks=1 00:10:08.082 00:10:08.082 ' 00:10:08.082 18:19:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:08.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.082 --rc genhtml_branch_coverage=1 00:10:08.082 --rc genhtml_function_coverage=1 00:10:08.082 --rc genhtml_legend=1 00:10:08.082 --rc geninfo_all_blocks=1 00:10:08.082 --rc geninfo_unexecuted_blocks=1 00:10:08.082 00:10:08.082 ' 00:10:08.082 18:19:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:08.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.082 --rc genhtml_branch_coverage=1 00:10:08.082 --rc genhtml_function_coverage=1 00:10:08.082 --rc genhtml_legend=1 00:10:08.082 --rc geninfo_all_blocks=1 00:10:08.082 --rc geninfo_unexecuted_blocks=1 00:10:08.082 00:10:08.082 ' 00:10:08.082 18:19:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:08.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.082 --rc genhtml_branch_coverage=1 00:10:08.082 --rc genhtml_function_coverage=1 00:10:08.082 --rc genhtml_legend=1 00:10:08.082 --rc geninfo_all_blocks=1 00:10:08.082 --rc geninfo_unexecuted_blocks=1 00:10:08.082 00:10:08.082 ' 00:10:08.082 18:19:06 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:08.082 18:19:06 -- nvmf/common.sh@7 -- # uname -s 00:10:08.082 18:19:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.082 18:19:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.082 18:19:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.082 18:19:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.082 18:19:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.082 18:19:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.082 18:19:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.082 18:19:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.082 18:19:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.082 18:19:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.082 18:19:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:10:08.082 
18:19:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:10:08.082 18:19:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.082 18:19:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.082 18:19:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:08.083 18:19:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:08.083 18:19:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.083 18:19:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.083 18:19:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.083 18:19:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.083 18:19:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.083 18:19:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.083 18:19:06 -- paths/export.sh@5 -- # export PATH 00:10:08.083 18:19:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.083 18:19:06 -- nvmf/common.sh@46 -- # : 0 00:10:08.083 18:19:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:08.083 18:19:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:08.083 18:19:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:08.083 18:19:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.083 18:19:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.083 18:19:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
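Editor's note: for the multipath test, common.sh generates a fresh host NQN with nvme gen-hostnqn and reuses the UUID embedded in it as the host ID; both are carried in the NVME_HOST array and passed on every nvme connect below, so the two paths present the same host identity to the target. Roughly as follows; the parameter expansion is a sketch of the derivation, not necessarily the exact line common.sh uses:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the UUID part
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")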
00:10:08.083 18:19:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:08.083 18:19:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:08.083 18:19:06 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:08.083 18:19:06 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:08.083 18:19:06 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:08.083 18:19:06 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:08.083 18:19:06 -- target/multipath.sh@43 -- # nvmftestinit 00:10:08.083 18:19:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:08.083 18:19:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.083 18:19:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:08.083 18:19:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:08.083 18:19:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:08.083 18:19:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.083 18:19:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:08.083 18:19:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.083 18:19:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:08.083 18:19:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:08.083 18:19:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:08.083 18:19:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:08.083 18:19:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:08.083 18:19:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:08.083 18:19:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.083 18:19:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.083 18:19:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:08.083 18:19:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:08.083 18:19:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:08.083 18:19:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:08.083 18:19:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:08.083 18:19:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.083 18:19:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:08.083 18:19:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:08.083 18:19:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:08.083 18:19:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:08.083 18:19:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:08.083 18:19:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:08.083 Cannot find device "nvmf_tgt_br" 00:10:08.083 18:19:06 -- nvmf/common.sh@154 -- # true 00:10:08.083 18:19:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.341 Cannot find device "nvmf_tgt_br2" 00:10:08.341 18:19:06 -- nvmf/common.sh@155 -- # true 00:10:08.341 18:19:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:08.341 18:19:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:08.341 Cannot find device "nvmf_tgt_br" 00:10:08.341 18:19:06 -- nvmf/common.sh@157 -- # true 00:10:08.341 18:19:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:08.341 Cannot find device "nvmf_tgt_br2" 00:10:08.341 18:19:06 -- nvmf/common.sh@158 -- # true 00:10:08.341 18:19:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:08.341 18:19:06 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:08.341 18:19:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:08.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.341 18:19:06 -- nvmf/common.sh@161 -- # true 00:10:08.341 18:19:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.341 18:19:06 -- nvmf/common.sh@162 -- # true 00:10:08.341 18:19:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:08.341 18:19:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:08.341 18:19:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.341 18:19:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.341 18:19:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.341 18:19:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:08.341 18:19:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:08.341 18:19:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:08.341 18:19:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:08.341 18:19:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:08.341 18:19:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:08.341 18:19:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:08.341 18:19:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:08.341 18:19:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:08.341 18:19:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:08.341 18:19:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:08.341 18:19:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:08.341 18:19:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:08.341 18:19:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:08.341 18:19:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:08.341 18:19:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:08.601 18:19:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:08.601 18:19:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:08.601 18:19:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:08.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:10:08.601 00:10:08.601 --- 10.0.0.2 ping statistics --- 00:10:08.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.601 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:08.601 18:19:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:08.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:08.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:10:08.601 00:10:08.601 --- 10.0.0.3 ping statistics --- 00:10:08.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.601 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:08.601 18:19:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:08.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:08.601 00:10:08.601 --- 10.0.0.1 ping statistics --- 00:10:08.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.601 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:08.601 18:19:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.601 18:19:06 -- nvmf/common.sh@421 -- # return 0 00:10:08.601 18:19:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:08.601 18:19:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.601 18:19:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:08.601 18:19:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:08.601 18:19:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.601 18:19:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:08.601 18:19:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:08.601 18:19:06 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:08.601 18:19:06 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:08.601 18:19:06 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:08.601 18:19:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:08.601 18:19:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:08.601 18:19:06 -- common/autotest_common.sh@10 -- # set +x 00:10:08.601 18:19:06 -- nvmf/common.sh@469 -- # nvmfpid=73828 00:10:08.601 18:19:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.601 18:19:06 -- nvmf/common.sh@470 -- # waitforlisten 73828 00:10:08.601 18:19:06 -- common/autotest_common.sh@829 -- # '[' -z 73828 ']' 00:10:08.601 18:19:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.601 18:19:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.601 18:19:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.601 18:19:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.601 18:19:06 -- common/autotest_common.sh@10 -- # set +x 00:10:08.601 [2024-11-17 18:19:06.711635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:08.601 [2024-11-17 18:19:06.711728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.601 [2024-11-17 18:19:06.852793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.859 [2024-11-17 18:19:06.896238] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:08.859 [2024-11-17 18:19:06.896706] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:08.859 [2024-11-17 18:19:06.896880] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.859 [2024-11-17 18:19:06.897045] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.859 [2024-11-17 18:19:06.897303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.859 [2024-11-17 18:19:06.897423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.859 [2024-11-17 18:19:06.897415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.859 [2024-11-17 18:19:06.897349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.791 18:19:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.791 18:19:07 -- common/autotest_common.sh@862 -- # return 0 00:10:09.791 18:19:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:09.791 18:19:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.791 18:19:07 -- common/autotest_common.sh@10 -- # set +x 00:10:09.791 18:19:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.792 18:19:07 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:09.792 [2024-11-17 18:19:08.036207] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.049 18:19:08 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:10.319 Malloc0 00:10:10.319 18:19:08 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:10.590 18:19:08 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:10.848 18:19:08 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.848 [2024-11-17 18:19:09.092835] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.106 18:19:09 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:11.106 [2024-11-17 18:19:09.329028] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:11.106 18:19:09 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:11.364 18:19:09 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:11.364 18:19:09 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.364 18:19:09 -- common/autotest_common.sh@1187 -- # local i=0 00:10:11.364 18:19:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.364 18:19:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:11.364 18:19:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:13.895 18:19:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
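Editor's note: the two nvme connect calls above attach the same subsystem over both listeners (10.0.0.2 and 10.0.0.3, port 4420), so the kernel assembles one multipath-capable subsystem with two controller paths; the -g -G flags are kept exactly as the harness passes them. Once the SPDKISFASTANDAWESOME namespace appears, each path's ANA state is readable from sysfs, which is what check_ana_state polls while the test later flips listener states over RPC (nvmf_subsystem_listener_set_ana_state with inaccessible / non_optimized) during the fio runs. A sketch of the connect-and-inspect flow, reusing the NVME_HOST array from above:

nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
# wait for the shared namespace to show up, then print the per-path ANA states
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
for f in /sys/block/nvme0c*n1/ana_state; do printf '%s: %s\n' "$f" "$(cat "$f")"; done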
00:10:13.895 18:19:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:13.895 18:19:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.895 18:19:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:13.896 18:19:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.896 18:19:11 -- common/autotest_common.sh@1197 -- # return 0 00:10:13.896 18:19:11 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:13.896 18:19:11 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:13.896 18:19:11 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:13.896 18:19:11 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:13.896 18:19:11 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:13.896 18:19:11 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:13.896 18:19:11 -- target/multipath.sh@38 -- # return 0 00:10:13.896 18:19:11 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:13.896 18:19:11 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:13.896 18:19:11 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:13.896 18:19:11 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:13.896 18:19:11 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:13.896 18:19:11 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:13.896 18:19:11 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:13.896 18:19:11 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:13.896 18:19:11 -- target/multipath.sh@22 -- # local timeout=20 00:10:13.896 18:19:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:13.896 18:19:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:13.896 18:19:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:13.896 18:19:11 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:13.896 18:19:11 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:13.896 18:19:11 -- target/multipath.sh@22 -- # local timeout=20 00:10:13.896 18:19:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:13.896 18:19:11 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:13.896 18:19:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:13.896 18:19:11 -- target/multipath.sh@85 -- # echo numa 00:10:13.896 18:19:11 -- target/multipath.sh@88 -- # fio_pid=73923 00:10:13.896 18:19:11 -- target/multipath.sh@90 -- # sleep 1 00:10:13.896 18:19:11 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:13.896 [global] 00:10:13.896 thread=1 00:10:13.896 invalidate=1 00:10:13.896 rw=randrw 00:10:13.896 time_based=1 00:10:13.896 runtime=6 00:10:13.896 ioengine=libaio 00:10:13.896 direct=1 00:10:13.896 bs=4096 00:10:13.896 iodepth=128 00:10:13.896 norandommap=0 00:10:13.896 numjobs=1 00:10:13.896 00:10:13.896 verify_dump=1 00:10:13.896 verify_backlog=512 00:10:13.896 verify_state_save=0 00:10:13.896 do_verify=1 00:10:13.896 verify=crc32c-intel 00:10:13.896 [job0] 00:10:13.896 filename=/dev/nvme0n1 00:10:13.896 Could not set queue depth (nvme0n1) 00:10:13.896 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.896 fio-3.35 00:10:13.896 Starting 1 thread 00:10:14.464 18:19:12 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:14.722 18:19:12 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:14.980 18:19:13 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:14.980 18:19:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:14.980 18:19:13 -- target/multipath.sh@22 -- # local timeout=20 00:10:14.980 18:19:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:14.980 18:19:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:14.980 18:19:13 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:14.980 18:19:13 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:14.980 18:19:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:14.980 18:19:13 -- target/multipath.sh@22 -- # local timeout=20 00:10:14.980 18:19:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:14.980 18:19:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:14.980 18:19:13 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:14.980 18:19:13 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:15.547 18:19:13 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:15.805 18:19:13 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:15.805 18:19:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:15.805 18:19:13 -- target/multipath.sh@22 -- # local timeout=20 00:10:15.805 18:19:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:15.805 18:19:13 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:15.805 18:19:13 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:15.805 18:19:13 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:15.805 18:19:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:15.805 18:19:13 -- target/multipath.sh@22 -- # local timeout=20 00:10:15.805 18:19:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:15.805 18:19:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:15.805 18:19:13 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:15.806 18:19:13 -- target/multipath.sh@104 -- # wait 73923 00:10:19.996 00:10:19.996 job0: (groupid=0, jobs=1): err= 0: pid=73944: Sun Nov 17 18:19:17 2024 00:10:19.996 read: IOPS=11.0k, BW=42.9MiB/s (44.9MB/s)(257MiB/6006msec) 00:10:19.996 slat (usec): min=2, max=5640, avg=52.69, stdev=218.64 00:10:19.996 clat (usec): min=940, max=22142, avg=7876.53, stdev=1459.93 00:10:19.996 lat (usec): min=980, max=22151, avg=7929.22, stdev=1464.47 00:10:19.996 clat percentiles (usec): 00:10:19.996 | 1.00th=[ 4113], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7046], 00:10:19.996 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:10:19.996 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[10945], 00:10:19.996 | 99.00th=[12387], 99.50th=[12780], 99.90th=[18220], 99.95th=[21103], 00:10:19.996 | 99.99th=[22152] 00:10:19.996 bw ( KiB/s): min= 8624, max=29256, per=52.09%, avg=22866.64, stdev=6647.01, samples=11 00:10:19.996 iops : min= 2156, max= 7314, avg=5716.64, stdev=1661.77, samples=11 00:10:19.996 write: IOPS=6500, BW=25.4MiB/s (26.6MB/s)(135MiB/5336msec); 0 zone resets 00:10:19.996 slat (usec): min=4, max=13266, avg=62.93, stdev=166.12 00:10:19.996 clat (usec): min=782, max=21740, avg=6995.65, stdev=1371.48 00:10:19.996 lat (usec): min=833, max=21789, avg=7058.58, stdev=1376.30 00:10:19.996 clat percentiles (usec): 00:10:19.996 | 1.00th=[ 3195], 5.00th=[ 4146], 10.00th=[ 5407], 20.00th=[ 6390], 00:10:19.996 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7308], 00:10:19.996 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8455], 00:10:19.996 | 99.00th=[10945], 99.50th=[11731], 99.90th=[20055], 99.95th=[21103], 00:10:19.996 | 99.99th=[21627] 00:10:19.996 bw ( KiB/s): min= 9064, max=28376, per=88.12%, avg=22910.64, stdev=6274.86, samples=11 00:10:19.996 iops : min= 2266, max= 7094, avg=5727.64, stdev=1568.73, samples=11 00:10:19.996 lat (usec) : 1000=0.01% 00:10:19.996 lat (msec) : 2=0.03%, 4=1.93%, 10=92.82%, 20=5.13%, 50=0.08% 00:10:19.996 cpu : usr=6.00%, sys=22.58%, ctx=5794, majf=0, minf=114 00:10:19.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:19.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.996 issued rwts: total=65906,34684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.997 00:10:19.997 Run status group 0 (all jobs): 00:10:19.997 READ: bw=42.9MiB/s (44.9MB/s), 42.9MiB/s-42.9MiB/s (44.9MB/s-44.9MB/s), io=257MiB (270MB), run=6006-6006msec 00:10:19.997 WRITE: bw=25.4MiB/s (26.6MB/s), 25.4MiB/s-25.4MiB/s (26.6MB/s-26.6MB/s), io=135MiB (142MB), run=5336-5336msec 00:10:19.997 00:10:19.997 Disk stats (read/write): 
00:10:19.997 nvme0n1: ios=64997/34053, merge=0/0, ticks=487010/221447, in_queue=708457, util=98.40% 00:10:19.997 18:19:17 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:20.255 18:19:18 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:20.255 18:19:18 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:20.255 18:19:18 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:20.255 18:19:18 -- target/multipath.sh@22 -- # local timeout=20 00:10:20.255 18:19:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:20.255 18:19:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:20.255 18:19:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:20.255 18:19:18 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:20.255 18:19:18 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:20.255 18:19:18 -- target/multipath.sh@22 -- # local timeout=20 00:10:20.256 18:19:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:20.256 18:19:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:20.256 18:19:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:20.256 18:19:18 -- target/multipath.sh@113 -- # echo round-robin 00:10:20.256 18:19:18 -- target/multipath.sh@116 -- # fio_pid=74029 00:10:20.256 18:19:18 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:20.256 18:19:18 -- target/multipath.sh@118 -- # sleep 1 00:10:20.515 [global] 00:10:20.515 thread=1 00:10:20.515 invalidate=1 00:10:20.515 rw=randrw 00:10:20.515 time_based=1 00:10:20.515 runtime=6 00:10:20.515 ioengine=libaio 00:10:20.515 direct=1 00:10:20.515 bs=4096 00:10:20.515 iodepth=128 00:10:20.515 norandommap=0 00:10:20.515 numjobs=1 00:10:20.515 00:10:20.515 verify_dump=1 00:10:20.515 verify_backlog=512 00:10:20.515 verify_state_save=0 00:10:20.515 do_verify=1 00:10:20.515 verify=crc32c-intel 00:10:20.515 [job0] 00:10:20.515 filename=/dev/nvme0n1 00:10:20.515 Could not set queue depth (nvme0n1) 00:10:20.515 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.515 fio-3.35 00:10:20.515 Starting 1 thread 00:10:21.452 18:19:19 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:21.712 18:19:19 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:21.971 18:19:20 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:21.971 18:19:20 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:21.971 18:19:20 -- target/multipath.sh@22 -- # local timeout=20 00:10:21.971 18:19:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:21.971 18:19:20 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:21.971 18:19:20 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:21.971 18:19:20 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:21.971 18:19:20 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:21.971 18:19:20 -- target/multipath.sh@22 -- # local timeout=20 00:10:21.971 18:19:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:21.971 18:19:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:21.971 18:19:20 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:21.971 18:19:20 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:22.230 18:19:20 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:22.490 18:19:20 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:22.490 18:19:20 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:22.490 18:19:20 -- target/multipath.sh@22 -- # local timeout=20 00:10:22.490 18:19:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:22.490 18:19:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:22.490 18:19:20 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:22.490 18:19:20 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:22.490 18:19:20 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:22.490 18:19:20 -- target/multipath.sh@22 -- # local timeout=20 00:10:22.490 18:19:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:22.490 18:19:20 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:22.490 18:19:20 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:22.490 18:19:20 -- target/multipath.sh@132 -- # wait 74029 00:10:26.718 00:10:26.718 job0: (groupid=0, jobs=1): err= 0: pid=74050: Sun Nov 17 18:19:24 2024 00:10:26.718 read: IOPS=12.3k, BW=48.0MiB/s (50.3MB/s)(288MiB/6002msec) 00:10:26.718 slat (usec): min=2, max=7054, avg=40.68, stdev=199.12 00:10:26.718 clat (usec): min=375, max=16339, avg=7168.39, stdev=1802.89 00:10:26.718 lat (usec): min=384, max=16347, avg=7209.07, stdev=1816.79 00:10:26.718 clat percentiles (usec): 00:10:26.718 | 1.00th=[ 3130], 5.00th=[ 4113], 10.00th=[ 4752], 20.00th=[ 5604], 00:10:26.718 | 30.00th=[ 6456], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7701], 00:10:26.718 | 70.00th=[ 8029], 80.00th=[ 8356], 90.00th=[ 8979], 95.00th=[10421], 00:10:26.718 | 99.00th=[12125], 99.50th=[12387], 99.90th=[13304], 99.95th=[14091], 00:10:26.718 | 99.99th=[15401] 00:10:26.718 bw ( KiB/s): min=14600, max=42000, per=53.83%, avg=26438.55, stdev=8193.30, samples=11 00:10:26.718 iops : min= 3650, max=10500, avg=6609.64, stdev=2048.32, samples=11 00:10:26.718 write: IOPS=7252, BW=28.3MiB/s (29.7MB/s)(149MiB/5258msec); 0 zone resets 00:10:26.718 slat (usec): min=4, max=7724, avg=51.11, stdev=134.47 00:10:26.718 clat (usec): min=1390, max=15290, avg=6128.58, stdev=1785.41 00:10:26.718 lat (usec): min=1423, max=15317, avg=6179.68, stdev=1800.71 00:10:26.718 clat percentiles (usec): 00:10:26.718 | 1.00th=[ 2409], 5.00th=[ 3097], 10.00th=[ 3523], 20.00th=[ 4178], 00:10:26.718 | 30.00th=[ 4948], 40.00th=[ 6194], 50.00th=[ 6718], 60.00th=[ 7046], 00:10:26.718 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8225], 00:10:26.718 | 99.00th=[10421], 99.50th=[11338], 99.90th=[13566], 99.95th=[14222], 00:10:26.718 | 99.99th=[15270] 00:10:26.718 bw ( KiB/s): min=14864, max=42576, per=90.95%, avg=26384.00, stdev=8076.77, samples=11 00:10:26.718 iops : min= 3716, max=10644, avg=6596.00, stdev=2019.19, samples=11 00:10:26.718 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04% 00:10:26.718 lat (msec) : 2=0.19%, 4=8.43%, 10=87.18%, 20=4.15% 00:10:26.718 cpu : usr=5.87%, sys=23.33%, ctx=6022, majf=0, minf=114 00:10:26.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:26.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.718 issued rwts: total=73694,38134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.718 00:10:26.718 Run status group 0 (all jobs): 00:10:26.718 READ: bw=48.0MiB/s (50.3MB/s), 48.0MiB/s-48.0MiB/s (50.3MB/s-50.3MB/s), io=288MiB (302MB), run=6002-6002msec 00:10:26.718 WRITE: bw=28.3MiB/s (29.7MB/s), 28.3MiB/s-28.3MiB/s (29.7MB/s-29.7MB/s), io=149MiB (156MB), run=5258-5258msec 00:10:26.718 00:10:26.718 Disk stats (read/write): 00:10:26.718 nvme0n1: ios=72095/38134, merge=0/0, ticks=490966/216986, in_queue=707952, util=98.66% 00:10:26.718 18:19:24 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:26.718 18:19:24 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.718 18:19:24 -- common/autotest_common.sh@1208 -- # local i=0 00:10:26.718 18:19:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:26.718 18:19:24 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.718 18:19:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:26.718 18:19:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.718 18:19:24 -- common/autotest_common.sh@1220 -- # return 0 00:10:26.718 18:19:24 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.977 18:19:25 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:26.977 18:19:25 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:26.978 18:19:25 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:26.978 18:19:25 -- target/multipath.sh@144 -- # nvmftestfini 00:10:26.978 18:19:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:26.978 18:19:25 -- nvmf/common.sh@116 -- # sync 00:10:26.978 18:19:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:26.978 18:19:25 -- nvmf/common.sh@119 -- # set +e 00:10:26.978 18:19:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:26.978 18:19:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:26.978 rmmod nvme_tcp 00:10:26.978 rmmod nvme_fabrics 00:10:26.978 rmmod nvme_keyring 00:10:27.237 18:19:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:27.237 18:19:25 -- nvmf/common.sh@123 -- # set -e 00:10:27.237 18:19:25 -- nvmf/common.sh@124 -- # return 0 00:10:27.237 18:19:25 -- nvmf/common.sh@477 -- # '[' -n 73828 ']' 00:10:27.237 18:19:25 -- nvmf/common.sh@478 -- # killprocess 73828 00:10:27.237 18:19:25 -- common/autotest_common.sh@936 -- # '[' -z 73828 ']' 00:10:27.237 18:19:25 -- common/autotest_common.sh@940 -- # kill -0 73828 00:10:27.237 18:19:25 -- common/autotest_common.sh@941 -- # uname 00:10:27.237 18:19:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:27.237 18:19:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73828 00:10:27.237 killing process with pid 73828 00:10:27.237 18:19:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:27.237 18:19:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:27.237 18:19:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73828' 00:10:27.237 18:19:25 -- common/autotest_common.sh@955 -- # kill 73828 00:10:27.237 18:19:25 -- common/autotest_common.sh@960 -- # wait 73828 00:10:27.237 18:19:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:27.237 18:19:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:27.237 18:19:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:27.237 18:19:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.237 18:19:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:27.237 18:19:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.237 18:19:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.237 18:19:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.237 18:19:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:27.237 ************************************ 00:10:27.237 END TEST nvmf_multipath 00:10:27.237 ************************************ 00:10:27.237 00:10:27.237 real 0m19.369s 00:10:27.237 user 1m12.796s 00:10:27.237 sys 0m9.987s 00:10:27.237 18:19:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:27.237 18:19:25 -- common/autotest_common.sh@10 -- # set +x 00:10:27.496 18:19:25 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:27.496 18:19:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:27.496 18:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:27.496 18:19:25 -- common/autotest_common.sh@10 -- # set +x 00:10:27.496 ************************************ 00:10:27.496 START TEST nvmf_zcopy 00:10:27.496 ************************************ 00:10:27.496 18:19:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:27.496 * Looking for test storage... 00:10:27.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:27.496 18:19:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:27.496 18:19:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:27.496 18:19:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:27.496 18:19:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:27.496 18:19:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:27.496 18:19:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:27.496 18:19:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:27.496 18:19:25 -- scripts/common.sh@335 -- # IFS=.-: 00:10:27.496 18:19:25 -- scripts/common.sh@335 -- # read -ra ver1 00:10:27.496 18:19:25 -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.496 18:19:25 -- scripts/common.sh@336 -- # read -ra ver2 00:10:27.496 18:19:25 -- scripts/common.sh@337 -- # local 'op=<' 00:10:27.496 18:19:25 -- scripts/common.sh@339 -- # ver1_l=2 00:10:27.496 18:19:25 -- scripts/common.sh@340 -- # ver2_l=1 00:10:27.496 18:19:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:27.496 18:19:25 -- scripts/common.sh@343 -- # case "$op" in 00:10:27.496 18:19:25 -- scripts/common.sh@344 -- # : 1 00:10:27.496 18:19:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:27.496 18:19:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.496 18:19:25 -- scripts/common.sh@364 -- # decimal 1 00:10:27.496 18:19:25 -- scripts/common.sh@352 -- # local d=1 00:10:27.496 18:19:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.496 18:19:25 -- scripts/common.sh@354 -- # echo 1 00:10:27.496 18:19:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:27.496 18:19:25 -- scripts/common.sh@365 -- # decimal 2 00:10:27.496 18:19:25 -- scripts/common.sh@352 -- # local d=2 00:10:27.496 18:19:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.496 18:19:25 -- scripts/common.sh@354 -- # echo 2 00:10:27.496 18:19:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:27.496 18:19:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:27.496 18:19:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:27.496 18:19:25 -- scripts/common.sh@367 -- # return 0 00:10:27.496 18:19:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.496 18:19:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:27.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.496 --rc genhtml_branch_coverage=1 00:10:27.496 --rc genhtml_function_coverage=1 00:10:27.496 --rc genhtml_legend=1 00:10:27.496 --rc geninfo_all_blocks=1 00:10:27.496 --rc geninfo_unexecuted_blocks=1 00:10:27.496 00:10:27.496 ' 00:10:27.496 18:19:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:27.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.496 --rc genhtml_branch_coverage=1 00:10:27.496 --rc genhtml_function_coverage=1 00:10:27.496 --rc genhtml_legend=1 00:10:27.496 --rc geninfo_all_blocks=1 00:10:27.496 --rc geninfo_unexecuted_blocks=1 00:10:27.496 00:10:27.496 ' 00:10:27.496 18:19:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:27.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.497 --rc genhtml_branch_coverage=1 00:10:27.497 --rc genhtml_function_coverage=1 00:10:27.497 --rc genhtml_legend=1 00:10:27.497 --rc geninfo_all_blocks=1 00:10:27.497 --rc geninfo_unexecuted_blocks=1 00:10:27.497 00:10:27.497 ' 00:10:27.497 18:19:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:27.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.497 --rc genhtml_branch_coverage=1 00:10:27.497 --rc genhtml_function_coverage=1 00:10:27.497 --rc genhtml_legend=1 00:10:27.497 --rc geninfo_all_blocks=1 00:10:27.497 --rc geninfo_unexecuted_blocks=1 00:10:27.497 00:10:27.497 ' 00:10:27.497 18:19:25 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:27.497 18:19:25 -- nvmf/common.sh@7 -- # uname -s 00:10:27.497 18:19:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.497 18:19:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.497 18:19:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.497 18:19:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.497 18:19:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.497 18:19:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.497 18:19:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.497 18:19:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.497 18:19:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.497 18:19:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.497 18:19:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:10:27.497 
18:19:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:10:27.497 18:19:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.497 18:19:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.497 18:19:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:27.497 18:19:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:27.497 18:19:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.497 18:19:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.497 18:19:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.497 18:19:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.497 18:19:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.497 18:19:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.497 18:19:25 -- paths/export.sh@5 -- # export PATH 00:10:27.497 18:19:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.497 18:19:25 -- nvmf/common.sh@46 -- # : 0 00:10:27.497 18:19:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:27.497 18:19:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:27.497 18:19:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:27.497 18:19:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.497 18:19:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.497 18:19:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
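
The common.sh sourcing traced above pins down the defaults the rest of the zcopy run relies on: TCP listeners on ports 4420-4422, the virtual (veth) network type, and a host NQN/ID pair freshly generated with nvme gen-hostnqn. A minimal sketch of the same environment outside the harness, assuming only that nvme-cli is installed (the UUID is regenerated on every run, so the value printed above is not reusable):

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NET_TYPE=virt                                   # veth pairs plus a network namespace, no real NICs
  NVME_HOSTNQN=$(nvme gen-hostnqn)                # e.g. nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}             # bare UUID portion of the NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
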
00:10:27.497 18:19:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:27.497 18:19:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:27.497 18:19:25 -- target/zcopy.sh@12 -- # nvmftestinit 00:10:27.497 18:19:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:27.497 18:19:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.497 18:19:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:27.497 18:19:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:27.497 18:19:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:27.497 18:19:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.497 18:19:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.497 18:19:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.497 18:19:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:27.497 18:19:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:27.497 18:19:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:27.497 18:19:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:27.497 18:19:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:27.497 18:19:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:27.497 18:19:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.497 18:19:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.497 18:19:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:27.497 18:19:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:27.497 18:19:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:27.497 18:19:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:27.497 18:19:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:27.497 18:19:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.497 18:19:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:27.497 18:19:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:27.497 18:19:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:27.497 18:19:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:27.497 18:19:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:27.756 18:19:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:27.756 Cannot find device "nvmf_tgt_br" 00:10:27.756 18:19:25 -- nvmf/common.sh@154 -- # true 00:10:27.756 18:19:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:27.756 Cannot find device "nvmf_tgt_br2" 00:10:27.756 18:19:25 -- nvmf/common.sh@155 -- # true 00:10:27.756 18:19:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:27.756 18:19:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:27.756 Cannot find device "nvmf_tgt_br" 00:10:27.756 18:19:25 -- nvmf/common.sh@157 -- # true 00:10:27.756 18:19:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:27.756 Cannot find device "nvmf_tgt_br2" 00:10:27.756 18:19:25 -- nvmf/common.sh@158 -- # true 00:10:27.756 18:19:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:27.756 18:19:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:27.756 18:19:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:27.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:27.756 18:19:25 -- nvmf/common.sh@161 -- # true 00:10:27.756 18:19:25 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:27.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:27.756 18:19:25 -- nvmf/common.sh@162 -- # true 00:10:27.756 18:19:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:27.756 18:19:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:27.756 18:19:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:27.756 18:19:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:27.756 18:19:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:27.756 18:19:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:27.756 18:19:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:27.756 18:19:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:27.756 18:19:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:27.756 18:19:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:27.756 18:19:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:27.756 18:19:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:27.756 18:19:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:27.757 18:19:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:27.757 18:19:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:27.757 18:19:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:27.757 18:19:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:27.757 18:19:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:27.757 18:19:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:27.757 18:19:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:28.015 18:19:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:28.015 18:19:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:28.015 18:19:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:28.015 18:19:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:28.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:10:28.015 00:10:28.015 --- 10.0.0.2 ping statistics --- 00:10:28.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.015 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:28.015 18:19:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:28.015 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:28.015 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:10:28.015 00:10:28.015 --- 10.0.0.3 ping statistics --- 00:10:28.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.015 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:28.015 18:19:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:28.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:28.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:28.015 00:10:28.015 --- 10.0.0.1 ping statistics --- 00:10:28.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.015 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:28.015 18:19:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.015 18:19:26 -- nvmf/common.sh@421 -- # return 0 00:10:28.015 18:19:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:28.015 18:19:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.015 18:19:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:28.015 18:19:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:28.015 18:19:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.015 18:19:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:28.015 18:19:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:28.015 18:19:26 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:28.015 18:19:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:28.015 18:19:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:28.015 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:10:28.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.015 18:19:26 -- nvmf/common.sh@469 -- # nvmfpid=74303 00:10:28.015 18:19:26 -- nvmf/common.sh@470 -- # waitforlisten 74303 00:10:28.015 18:19:26 -- common/autotest_common.sh@829 -- # '[' -z 74303 ']' 00:10:28.015 18:19:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:28.015 18:19:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.015 18:19:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.015 18:19:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.015 18:19:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.015 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:10:28.015 [2024-11-17 18:19:26.149739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:28.016 [2024-11-17 18:19:26.149848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.275 [2024-11-17 18:19:26.288693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.275 [2024-11-17 18:19:26.322176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:28.275 [2024-11-17 18:19:26.322337] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.275 [2024-11-17 18:19:26.322352] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.275 [2024-11-17 18:19:26.322362] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
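
The nvmf_veth_init sequence above gives the run a self-contained test network: the initiator keeps 10.0.0.1/24 on nvmf_init_if in the root namespace, the target ends of the veth pairs (10.0.0.2 and 10.0.0.3) sit inside the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, and TCP port 4420 is allowed through iptables before the target is started inside the namespace. Condensed to a single target interface, the same setup is roughly the following sketch of the commands the trace runs (omitting the cleanup of leftovers from earlier runs):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                           # reachability check, as in the trace
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The second veth pair (nvmf_tgt_if2, 10.0.0.3) is created the same way and gives the target a second listen address; the zcopy test below only attaches to 10.0.0.2.
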
00:10:28.275 [2024-11-17 18:19:26.322393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.275 18:19:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:28.275 18:19:26 -- common/autotest_common.sh@862 -- # return 0 00:10:28.275 18:19:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:28.275 18:19:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:28.275 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:10:28.275 18:19:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.275 18:19:26 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:28.275 18:19:26 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:28.275 18:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.275 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:10:28.275 [2024-11-17 18:19:26.440891] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:28.275 18:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.275 18:19:26 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:28.275 18:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.275 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:10:28.275 18:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.275 18:19:26 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.275 18:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.275 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:10:28.275 [2024-11-17 18:19:26.457018] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.275 18:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.275 18:19:26 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:28.275 18:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.275 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:10:28.275 18:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.275 18:19:26 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:28.275 18:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.275 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:10:28.275 malloc0 00:10:28.275 18:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.275 18:19:26 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:28.275 18:19:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.275 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:10:28.275 18:19:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.275 18:19:26 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:28.275 18:19:26 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:28.275 18:19:26 -- nvmf/common.sh@520 -- # config=() 00:10:28.275 18:19:26 -- nvmf/common.sh@520 -- # local subsystem config 00:10:28.275 18:19:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:28.275 18:19:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:28.275 { 00:10:28.275 "params": { 00:10:28.275 "name": "Nvme$subsystem", 00:10:28.275 "trtype": "$TEST_TRANSPORT", 
00:10:28.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.275 "adrfam": "ipv4", 00:10:28.275 "trsvcid": "$NVMF_PORT", 00:10:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.275 "hdgst": ${hdgst:-false}, 00:10:28.275 "ddgst": ${ddgst:-false} 00:10:28.275 }, 00:10:28.275 "method": "bdev_nvme_attach_controller" 00:10:28.275 } 00:10:28.275 EOF 00:10:28.275 )") 00:10:28.275 18:19:26 -- nvmf/common.sh@542 -- # cat 00:10:28.275 18:19:26 -- nvmf/common.sh@544 -- # jq . 00:10:28.275 18:19:26 -- nvmf/common.sh@545 -- # IFS=, 00:10:28.275 18:19:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:28.275 "params": { 00:10:28.275 "name": "Nvme1", 00:10:28.275 "trtype": "tcp", 00:10:28.275 "traddr": "10.0.0.2", 00:10:28.275 "adrfam": "ipv4", 00:10:28.275 "trsvcid": "4420", 00:10:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.275 "hdgst": false, 00:10:28.275 "ddgst": false 00:10:28.275 }, 00:10:28.275 "method": "bdev_nvme_attach_controller" 00:10:28.275 }' 00:10:28.535 [2024-11-17 18:19:26.542029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:28.535 [2024-11-17 18:19:26.542126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74329 ] 00:10:28.535 [2024-11-17 18:19:26.685236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.535 [2024-11-17 18:19:26.724935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.807 Running I/O for 10 seconds... 00:10:38.786 00:10:38.786 Latency(us) 00:10:38.786 [2024-11-17T18:19:37.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:38.786 [2024-11-17T18:19:37.053Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:38.786 Verification LBA range: start 0x0 length 0x1000 00:10:38.786 Nvme1n1 : 10.01 9978.28 77.96 0.00 0.00 12794.57 1333.06 20852.36 00:10:38.786 [2024-11-17T18:19:37.053Z] =================================================================================================================== 00:10:38.786 [2024-11-17T18:19:37.053Z] Total : 9978.28 77.96 0.00 0.00 12794.57 1333.06 20852.36 00:10:38.786 18:19:37 -- target/zcopy.sh@39 -- # perfpid=74446 00:10:38.786 18:19:37 -- target/zcopy.sh@41 -- # xtrace_disable 00:10:38.786 18:19:37 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:38.786 18:19:37 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:38.786 18:19:37 -- common/autotest_common.sh@10 -- # set +x 00:10:38.786 18:19:37 -- nvmf/common.sh@520 -- # config=() 00:10:38.786 18:19:37 -- nvmf/common.sh@520 -- # local subsystem config 00:10:38.786 18:19:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:38.786 18:19:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:38.786 { 00:10:38.786 "params": { 00:10:38.786 "name": "Nvme$subsystem", 00:10:38.786 "trtype": "$TEST_TRANSPORT", 00:10:38.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:38.786 "adrfam": "ipv4", 00:10:38.786 "trsvcid": "$NVMF_PORT", 00:10:38.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:38.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:38.786 "hdgst": ${hdgst:-false}, 00:10:38.786 "ddgst": ${ddgst:-false} 
00:10:38.786 }, 00:10:38.786 "method": "bdev_nvme_attach_controller" 00:10:38.786 } 00:10:38.786 EOF 00:10:38.786 )") 00:10:38.786 18:19:37 -- nvmf/common.sh@542 -- # cat 00:10:38.786 [2024-11-17 18:19:37.017755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.786 [2024-11-17 18:19:37.017926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.786 18:19:37 -- nvmf/common.sh@544 -- # jq . 00:10:38.786 18:19:37 -- nvmf/common.sh@545 -- # IFS=, 00:10:38.786 18:19:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:38.786 "params": { 00:10:38.786 "name": "Nvme1", 00:10:38.786 "trtype": "tcp", 00:10:38.786 "traddr": "10.0.0.2", 00:10:38.786 "adrfam": "ipv4", 00:10:38.786 "trsvcid": "4420", 00:10:38.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:38.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:38.786 "hdgst": false, 00:10:38.786 "ddgst": false 00:10:38.786 }, 00:10:38.786 "method": "bdev_nvme_attach_controller" 00:10:38.786 }' 00:10:38.786 [2024-11-17 18:19:37.029691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.786 [2024-11-17 18:19:37.029855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.786 [2024-11-17 18:19:37.041703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.786 [2024-11-17 18:19:37.041867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.053697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.053856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.059803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:39.046 [2024-11-17 18:19:37.060030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74446 ] 00:10:39.046 [2024-11-17 18:19:37.065703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.065863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.077715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.077876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.089714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.089859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.101701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.101854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.113727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.113879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.125712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.125864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.137737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.137888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.149740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.149892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.161743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.161893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.173760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.173911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.185768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.185921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.197769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.197923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.198470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.046 [2024-11-17 18:19:37.209803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.210075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
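
All of the rpc_cmd invocations traced above go through scripts/rpc.py against the freshly started target. Reconstructed as plain commands with the same arguments the trace shows (rpc.py uses its default /var/tmp/spdk.sock socket here; the comments are interpretation, not harness output), the provisioning boils down to:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                   # TCP transport with zero-copy enabled
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0                          # 32 MB RAM-backed bdev, 4096-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # attach malloc0 as NSID 1

With NSID 1 attached once, every later nvmf_subsystem_add_ns attempt for the same NSID is rejected, which is what the repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs in the remainder of this trace show.
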
00:10:39.046 [2024-11-17 18:19:37.221809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.222074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.231159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.046 [2024-11-17 18:19:37.233789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.233953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.245824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.246027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.257848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.258165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.269830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.269868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.281832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.281873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.293830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.293865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.046 [2024-11-17 18:19:37.305837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.046 [2024-11-17 18:19:37.305866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.317851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.317880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.329859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.329889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.341870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.341898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.353884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.353915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 Running I/O for 5 seconds... 
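
Both bdevperf runs receive their attach configuration over an anonymous pipe (--json /dev/fd/62 for the 10-second verify pass, --json /dev/fd/63 for the 5-second randrw pass started above). gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem; reassembled from the fragments printed in the trace, that entry is:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

(The wrapper that places this entry inside a bdev-subsystem config array is built by gen_nvmf_target_json and is not echoed in the trace.) The randrw run itself, -t 5 -q 128 -w randrw -M 50 -o 8192, i.e. five seconds of 50/50 random reads and writes at queue depth 128 with 8 KiB I/O, is what the "Running I/O for 5 seconds..." line above starts; the namespace-management errors interleaved below are produced while that workload is in flight.
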
00:10:39.306 [2024-11-17 18:19:37.372038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.372070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.387507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.387539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.404889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.404922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.421885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.421917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.438956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.439101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.455105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.455137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.473703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.473755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.487830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.487878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.503489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.503543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.519925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.519971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.535700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.535751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.547001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.547044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.306 [2024-11-17 18:19:37.563809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.306 [2024-11-17 18:19:37.564067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.578113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.578171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.593583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 
[2024-11-17 18:19:37.593619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.611643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.611675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.626726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.626895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.643555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.643588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.660136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.660173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.677613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.677645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.695447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.695477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.711496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.711640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.722558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.722773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.738360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.738567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.754852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.755018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.771262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.771478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.787776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.787807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.804410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.804442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.566 [2024-11-17 18:19:37.821566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.566 [2024-11-17 18:19:37.821598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:37.838057] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:37.838124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:37.855769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:37.855825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:37.871568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:37.871620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:37.889498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:37.889554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:37.905193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:37.905232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:37.922568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:37.922735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:37.939056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:37.939087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:37.955740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:37.955772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:37.973177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:37.973209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:37.988463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:37.988495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:38.006047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:38.006295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:38.021396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:38.021428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:38.032880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:38.033052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:38.049603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:38.049636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:38.064975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:38.065007] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.825 [2024-11-17 18:19:38.082623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.825 [2024-11-17 18:19:38.082670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.097969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.098001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.108771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.108801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.124972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.125004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.141335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.141367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.159008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.159184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.173888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.174058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.190343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.190377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.207465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.207496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.224527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.224699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.240081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.240252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.257408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.257440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.274806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.274978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.290764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.290796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.308272] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.308331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.323932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.323963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.085 [2024-11-17 18:19:38.335534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.085 [2024-11-17 18:19:38.335568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.344 [2024-11-17 18:19:38.352812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.344 [2024-11-17 18:19:38.352847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.344 [2024-11-17 18:19:38.367000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.344 [2024-11-17 18:19:38.367033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.344 [2024-11-17 18:19:38.382234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.344 [2024-11-17 18:19:38.382412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.344 [2024-11-17 18:19:38.393438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.344 [2024-11-17 18:19:38.393623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.344 [2024-11-17 18:19:38.410568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.344 [2024-11-17 18:19:38.410600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.344 [2024-11-17 18:19:38.426107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.344 [2024-11-17 18:19:38.426139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.344 [2024-11-17 18:19:38.443913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.344 [2024-11-17 18:19:38.443945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.344 [2024-11-17 18:19:38.459717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.344 [2024-11-17 18:19:38.459750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.345 [2024-11-17 18:19:38.477434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.345 [2024-11-17 18:19:38.477466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.345 [2024-11-17 18:19:38.492477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.345 [2024-11-17 18:19:38.492508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.345 [2024-11-17 18:19:38.503405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.345 [2024-11-17 18:19:38.503436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.345 [2024-11-17 18:19:38.519757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.345 [2024-11-17 18:19:38.519789] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.345 [2024-11-17 18:19:38.536918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.345 [2024-11-17 18:19:38.536951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.345 [2024-11-17 18:19:38.551810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.345 [2024-11-17 18:19:38.551841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.345 [2024-11-17 18:19:38.560898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.345 [2024-11-17 18:19:38.560930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.345 [2024-11-17 18:19:38.577127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.345 [2024-11-17 18:19:38.577160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.345 [2024-11-17 18:19:38.595153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.345 [2024-11-17 18:19:38.595230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.611043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.611254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.627364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.627434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.645072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.645261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.661753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.661785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.677217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.677250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.694338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.694372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.712177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.712372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.727427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.727462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.739075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.739246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.755191] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.755374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.771152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.771355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.788814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.788845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.806087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.806119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.821768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.821800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.839621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.839653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.604 [2024-11-17 18:19:38.854099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.604 [2024-11-17 18:19:38.854130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:38.869827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:38.869860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:38.887292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:38.887498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:38.902766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:38.902975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:38.918245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:38.918574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:38.929688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:38.929938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:38.946276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:38.946346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:38.962825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:38.962872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:38.979043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:38.979101] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:38.996962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:38.997004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:39.011752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:39.012018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:39.020987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:39.021019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:39.036213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:39.036249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:39.045634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:39.045664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:39.060819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:39.060850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:39.076704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:39.076735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:39.094086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:39.094134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:39.109914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:39.109944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.864 [2024-11-17 18:19:39.120859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.864 [2024-11-17 18:19:39.120889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.135966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.135996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.146644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.146818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.162130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.162365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.178003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.178209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.194987] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.195019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.211533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.211565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.228256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.228349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.244129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.244174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.261650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.261720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.277168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.277484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.294597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.294630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.303913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.303946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.317758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.317788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.332650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.332810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.344100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.344283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.360062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.360095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.123 [2024-11-17 18:19:39.377626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.123 [2024-11-17 18:19:39.377674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.394773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.394946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.411286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.411349] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.427753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.427785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.443844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.443875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.462402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.462438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.476233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.476265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.492745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.492935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.509420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.509454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.526670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.526702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.541766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.541800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.553344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.553376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.569608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.569655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.585248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.585305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.602477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.602539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.617262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.617337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.382 [2024-11-17 18:19:39.634755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.382 [2024-11-17 18:19:39.634788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.648364] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.648563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.665952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.665984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.681348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.681380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.692758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.692802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.708624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.708669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.724356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.724400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.742049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.742092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.756031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.756073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.772010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.772054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.789631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.789674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.807280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.807337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.823230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.823275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.839580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.839623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.855897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.855940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.872690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.872733] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.887841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.887885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.641 [2024-11-17 18:19:39.899728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.641 [2024-11-17 18:19:39.899770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:39.915348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:39.915410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:39.932666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:39.932710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:39.948599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:39.948643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:39.965980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:39.966024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:39.982551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:39.982595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:39.999783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:39.999825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:40.016594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:40.016639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:40.032479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:40.032538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:40.049138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:40.049182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:40.064301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:40.064354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:40.075425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:40.075453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:40.091088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:40.091131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:40.107610] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:40.107653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:40.123818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:40.123861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:40.141464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:40.141506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.900 [2024-11-17 18:19:40.157187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.900 [2024-11-17 18:19:40.157230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.173398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.173440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.184068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.184111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.199971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.200016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.216477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.216519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.232563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.232608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.249814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.249858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.265841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.265884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.283267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.283331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.300489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.300584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.316235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.316330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.332613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.332697] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.350327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.350381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.366113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.366195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.382254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.382337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.400204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.400256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.159 [2024-11-17 18:19:40.415349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.159 [2024-11-17 18:19:40.415416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.425957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.426001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.441270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.441313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.457554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.457599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.474776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.474820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.490949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.490994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.509387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.509433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.523790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.523833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.539008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.539052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.550064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.550125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.566505] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.566548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.581731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.581774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.597091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.597135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.614006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.614049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.631132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.631175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.648039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.648084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.663763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.663837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.419 [2024-11-17 18:19:40.681986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.419 [2024-11-17 18:19:40.682031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.678 [2024-11-17 18:19:40.696212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.696267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.712735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.712786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.729059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.729095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.746323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.746355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.763103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.763308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.778366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.778401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.793992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.794205] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.810701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.810732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.827438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.827468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.844247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.844308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.861659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.861707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.877708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.877739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.895147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.895179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.910640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.910672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.679 [2024-11-17 18:19:40.927878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.679 [2024-11-17 18:19:40.927909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:40.945819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:40.945866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:40.960724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:40.960896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:40.978236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:40.978270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:40.994088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:40.994136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:41.011335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:41.011378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:41.028348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:41.028379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:41.045046] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:41.045077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:41.061855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:41.061888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:41.076968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:41.077000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:41.087806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:41.087838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:41.105602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:41.105636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:41.119530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:41.119720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:41.136443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:41.136631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:41.151791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.938 [2024-11-17 18:19:41.151962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.938 [2024-11-17 18:19:41.169710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.939 [2024-11-17 18:19:41.169882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.939 [2024-11-17 18:19:41.184338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.939 [2024-11-17 18:19:41.184510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.939 [2024-11-17 18:19:41.201467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.939 [2024-11-17 18:19:41.201664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.215539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.215695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.232045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.232216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.247463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.247635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.258226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.258423] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.274551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.274755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.290141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.290371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.307108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.307324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.323269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.323456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.338961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.339147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.356614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.356801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.372377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.372550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.388865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.388908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.406516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.406560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.422946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.422989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.439688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.439731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.198 [2024-11-17 18:19:41.456550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.198 [2024-11-17 18:19:41.456595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.470537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.470609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.486430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.486462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.502843] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.502886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.519336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.519372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.536097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.536140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.552944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.552987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.569534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.569577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.587100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.587162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.601224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.601252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.616478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.616506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.633456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.633485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.651881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.651910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.666057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.666100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.681463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.681523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.699685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.699731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.458 [2024-11-17 18:19:41.713313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.458 [2024-11-17 18:19:41.713367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.729162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.729205] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.746600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.746674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.763433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.763502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.780681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.780718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.796092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.796144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.805027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.805082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.820674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.820727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.840247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.840314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.856784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.856829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.872779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.872823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.881892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.881935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.897777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.897822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.915813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.915857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.930715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.930757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.946439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.946483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.962260] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.962316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.718 [2024-11-17 18:19:41.979555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.718 [2024-11-17 18:19:41.979598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:41.997196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:41.997227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.012990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.013033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.029492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.029536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.045960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.046022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.064001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.064065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.078056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.078113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.093418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.093483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.111950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.112008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.126600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.126631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.135914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.135955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.152390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.152420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.170514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.170558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.185854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.185897] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.203587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.203633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.218761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.987 [2024-11-17 18:19:42.218805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.987 [2024-11-17 18:19:42.229459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.988 [2024-11-17 18:19:42.229486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.254 [2024-11-17 18:19:42.246904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.254 [2024-11-17 18:19:42.246950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.254 [2024-11-17 18:19:42.260989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.254 [2024-11-17 18:19:42.261032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.254 [2024-11-17 18:19:42.277886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.254 [2024-11-17 18:19:42.277929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.254 [2024-11-17 18:19:42.292250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.254 [2024-11-17 18:19:42.292321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.254 [2024-11-17 18:19:42.308796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.308840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.325977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.326021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.342807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.342850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.360368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.360411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 00:10:44.255 Latency(us) 00:10:44.255 [2024-11-17T18:19:42.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.255 [2024-11-17T18:19:42.522Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:44.255 Nvme1n1 : 5.01 13233.95 103.39 0.00 0.00 9661.35 2323.55 18826.71 00:10:44.255 [2024-11-17T18:19:42.522Z] =================================================================================================================== 00:10:44.255 [2024-11-17T18:19:42.522Z] Total : 13233.95 103.39 0.00 0.00 9661.35 2323.55 18826.71 00:10:44.255 [2024-11-17 18:19:42.370992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.371034] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.382987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.383027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.395019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.395073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.407025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.407078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.419055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.419105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.431038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.431088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.443039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.443087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.455025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.455084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.467049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.467097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.479032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.479074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.491047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.491096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 [2024-11-17 18:19:42.503024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.255 [2024-11-17 18:19:42.503061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.255 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74446) - No such process 00:10:44.255 18:19:42 -- target/zcopy.sh@49 -- # wait 74446 00:10:44.255 18:19:42 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.255 18:19:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.255 18:19:42 -- common/autotest_common.sh@10 -- # set +x 00:10:44.255 18:19:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.514 18:19:42 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:44.514 18:19:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.514 18:19:42 -- common/autotest_common.sh@10 -- # set +x 00:10:44.514 delay0 00:10:44.514 
18:19:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.514 18:19:42 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:44.514 18:19:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.514 18:19:42 -- common/autotest_common.sh@10 -- # set +x 00:10:44.514 18:19:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.514 18:19:42 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:44.514 [2024-11-17 18:19:42.700369] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:51.095 Initializing NVMe Controllers 00:10:51.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:51.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:51.095 Initialization complete. Launching workers. 00:10:51.095 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 77 00:10:51.095 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 364, failed to submit 33 00:10:51.095 success 257, unsuccess 107, failed 0 00:10:51.095 18:19:48 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:51.095 18:19:48 -- target/zcopy.sh@60 -- # nvmftestfini 00:10:51.095 18:19:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:51.095 18:19:48 -- nvmf/common.sh@116 -- # sync 00:10:51.095 18:19:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:51.096 18:19:48 -- nvmf/common.sh@119 -- # set +e 00:10:51.096 18:19:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:51.096 18:19:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:51.096 rmmod nvme_tcp 00:10:51.096 rmmod nvme_fabrics 00:10:51.096 rmmod nvme_keyring 00:10:51.096 18:19:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:51.096 18:19:48 -- nvmf/common.sh@123 -- # set -e 00:10:51.096 18:19:48 -- nvmf/common.sh@124 -- # return 0 00:10:51.096 18:19:48 -- nvmf/common.sh@477 -- # '[' -n 74303 ']' 00:10:51.096 18:19:48 -- nvmf/common.sh@478 -- # killprocess 74303 00:10:51.096 18:19:48 -- common/autotest_common.sh@936 -- # '[' -z 74303 ']' 00:10:51.096 18:19:48 -- common/autotest_common.sh@940 -- # kill -0 74303 00:10:51.096 18:19:48 -- common/autotest_common.sh@941 -- # uname 00:10:51.096 18:19:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:51.096 18:19:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74303 00:10:51.096 killing process with pid 74303 00:10:51.096 18:19:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:51.096 18:19:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:51.096 18:19:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74303' 00:10:51.096 18:19:48 -- common/autotest_common.sh@955 -- # kill 74303 00:10:51.096 18:19:48 -- common/autotest_common.sh@960 -- # wait 74303 00:10:51.096 18:19:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:51.096 18:19:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:51.096 18:19:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:51.096 18:19:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.096 18:19:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:51.096 18:19:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:51.096 18:19:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.096 18:19:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.096 18:19:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:51.096 00:10:51.096 real 0m23.536s 00:10:51.096 user 0m39.119s 00:10:51.096 sys 0m6.342s 00:10:51.096 18:19:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:51.096 18:19:49 -- common/autotest_common.sh@10 -- # set +x 00:10:51.096 ************************************ 00:10:51.096 END TEST nvmf_zcopy 00:10:51.096 ************************************ 00:10:51.096 18:19:49 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:51.096 18:19:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:51.096 18:19:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.097 18:19:49 -- common/autotest_common.sh@10 -- # set +x 00:10:51.097 ************************************ 00:10:51.097 START TEST nvmf_nmic 00:10:51.097 ************************************ 00:10:51.097 18:19:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:51.097 * Looking for test storage... 00:10:51.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:51.097 18:19:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:51.097 18:19:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:51.097 18:19:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:51.097 18:19:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:51.097 18:19:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:51.097 18:19:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:51.097 18:19:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:51.097 18:19:49 -- scripts/common.sh@335 -- # IFS=.-: 00:10:51.097 18:19:49 -- scripts/common.sh@335 -- # read -ra ver1 00:10:51.097 18:19:49 -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.097 18:19:49 -- scripts/common.sh@336 -- # read -ra ver2 00:10:51.097 18:19:49 -- scripts/common.sh@337 -- # local 'op=<' 00:10:51.097 18:19:49 -- scripts/common.sh@339 -- # ver1_l=2 00:10:51.097 18:19:49 -- scripts/common.sh@340 -- # ver2_l=1 00:10:51.097 18:19:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:51.097 18:19:49 -- scripts/common.sh@343 -- # case "$op" in 00:10:51.097 18:19:49 -- scripts/common.sh@344 -- # : 1 00:10:51.097 18:19:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:51.097 18:19:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.097 18:19:49 -- scripts/common.sh@364 -- # decimal 1 00:10:51.097 18:19:49 -- scripts/common.sh@352 -- # local d=1 00:10:51.097 18:19:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.097 18:19:49 -- scripts/common.sh@354 -- # echo 1 00:10:51.097 18:19:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:51.097 18:19:49 -- scripts/common.sh@365 -- # decimal 2 00:10:51.097 18:19:49 -- scripts/common.sh@352 -- # local d=2 00:10:51.097 18:19:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.097 18:19:49 -- scripts/common.sh@354 -- # echo 2 00:10:51.097 18:19:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:51.097 18:19:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:51.097 18:19:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:51.097 18:19:49 -- scripts/common.sh@367 -- # return 0 00:10:51.097 18:19:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.097 18:19:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:51.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.097 --rc genhtml_branch_coverage=1 00:10:51.097 --rc genhtml_function_coverage=1 00:10:51.097 --rc genhtml_legend=1 00:10:51.097 --rc geninfo_all_blocks=1 00:10:51.097 --rc geninfo_unexecuted_blocks=1 00:10:51.097 00:10:51.097 ' 00:10:51.097 18:19:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:51.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.097 --rc genhtml_branch_coverage=1 00:10:51.097 --rc genhtml_function_coverage=1 00:10:51.097 --rc genhtml_legend=1 00:10:51.097 --rc geninfo_all_blocks=1 00:10:51.098 --rc geninfo_unexecuted_blocks=1 00:10:51.098 00:10:51.098 ' 00:10:51.098 18:19:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:51.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.098 --rc genhtml_branch_coverage=1 00:10:51.098 --rc genhtml_function_coverage=1 00:10:51.098 --rc genhtml_legend=1 00:10:51.098 --rc geninfo_all_blocks=1 00:10:51.098 --rc geninfo_unexecuted_blocks=1 00:10:51.098 00:10:51.098 ' 00:10:51.098 18:19:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:51.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.098 --rc genhtml_branch_coverage=1 00:10:51.098 --rc genhtml_function_coverage=1 00:10:51.098 --rc genhtml_legend=1 00:10:51.098 --rc geninfo_all_blocks=1 00:10:51.098 --rc geninfo_unexecuted_blocks=1 00:10:51.098 00:10:51.098 ' 00:10:51.098 18:19:49 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:51.098 18:19:49 -- nvmf/common.sh@7 -- # uname -s 00:10:51.098 18:19:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.098 18:19:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.098 18:19:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.098 18:19:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.098 18:19:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.098 18:19:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.098 18:19:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.098 18:19:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.098 18:19:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.098 18:19:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.100 18:19:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:10:51.100 
18:19:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:10:51.100 18:19:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.100 18:19:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.100 18:19:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:51.100 18:19:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:51.100 18:19:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.100 18:19:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.101 18:19:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.101 18:19:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.101 18:19:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.101 18:19:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.101 18:19:49 -- paths/export.sh@5 -- # export PATH 00:10:51.101 18:19:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.101 18:19:49 -- nvmf/common.sh@46 -- # : 0 00:10:51.101 18:19:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:51.101 18:19:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:51.101 18:19:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:51.101 18:19:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.101 18:19:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.101 18:19:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:10:51.101 18:19:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:51.101 18:19:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:51.101 18:19:49 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:51.101 18:19:49 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:51.101 18:19:49 -- target/nmic.sh@14 -- # nvmftestinit 00:10:51.101 18:19:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:51.101 18:19:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.101 18:19:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:51.101 18:19:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:51.101 18:19:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:51.101 18:19:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.101 18:19:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.101 18:19:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.361 18:19:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:51.361 18:19:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:51.361 18:19:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:51.361 18:19:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:51.361 18:19:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:51.361 18:19:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:51.361 18:19:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.361 18:19:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.361 18:19:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:51.361 18:19:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:51.361 18:19:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:51.361 18:19:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:51.361 18:19:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:51.361 18:19:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.361 18:19:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:51.361 18:19:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:51.361 18:19:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:51.361 18:19:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:51.361 18:19:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:51.361 18:19:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:51.361 Cannot find device "nvmf_tgt_br" 00:10:51.361 18:19:49 -- nvmf/common.sh@154 -- # true 00:10:51.361 18:19:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:51.361 Cannot find device "nvmf_tgt_br2" 00:10:51.361 18:19:49 -- nvmf/common.sh@155 -- # true 00:10:51.361 18:19:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:51.361 18:19:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:51.361 Cannot find device "nvmf_tgt_br" 00:10:51.361 18:19:49 -- nvmf/common.sh@157 -- # true 00:10:51.361 18:19:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:51.361 Cannot find device "nvmf_tgt_br2" 00:10:51.361 18:19:49 -- nvmf/common.sh@158 -- # true 00:10:51.361 18:19:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:51.361 18:19:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:51.361 18:19:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.361 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:51.361 18:19:49 -- nvmf/common.sh@161 -- # true 00:10:51.361 18:19:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.361 18:19:49 -- nvmf/common.sh@162 -- # true 00:10:51.361 18:19:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:51.361 18:19:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:51.361 18:19:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:51.361 18:19:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:51.361 18:19:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:51.361 18:19:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:51.361 18:19:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:51.361 18:19:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:51.361 18:19:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:51.361 18:19:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:51.361 18:19:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:51.361 18:19:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:51.361 18:19:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:51.362 18:19:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:51.362 18:19:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:51.362 18:19:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:51.362 18:19:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:51.362 18:19:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:51.362 18:19:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:51.362 18:19:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:51.621 18:19:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:51.621 18:19:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:51.621 18:19:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:51.621 18:19:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:51.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:10:51.621 00:10:51.621 --- 10.0.0.2 ping statistics --- 00:10:51.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.621 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:51.621 18:19:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:51.621 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:51.621 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:10:51.621 00:10:51.621 --- 10.0.0.3 ping statistics --- 00:10:51.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.621 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:51.621 18:19:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:51.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:51.621 00:10:51.621 --- 10.0.0.1 ping statistics --- 00:10:51.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.621 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:51.621 18:19:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.621 18:19:49 -- nvmf/common.sh@421 -- # return 0 00:10:51.621 18:19:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:51.621 18:19:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.621 18:19:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:51.621 18:19:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:51.621 18:19:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.621 18:19:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:51.621 18:19:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:51.621 18:19:49 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:51.621 18:19:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:51.621 18:19:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.621 18:19:49 -- common/autotest_common.sh@10 -- # set +x 00:10:51.621 18:19:49 -- nvmf/common.sh@469 -- # nvmfpid=74772 00:10:51.621 18:19:49 -- nvmf/common.sh@470 -- # waitforlisten 74772 00:10:51.621 18:19:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.621 18:19:49 -- common/autotest_common.sh@829 -- # '[' -z 74772 ']' 00:10:51.621 18:19:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.621 18:19:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.621 18:19:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.621 18:19:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.621 18:19:49 -- common/autotest_common.sh@10 -- # set +x 00:10:51.621 [2024-11-17 18:19:49.752067] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:51.621 [2024-11-17 18:19:49.752176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.880 [2024-11-17 18:19:49.891822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.880 [2024-11-17 18:19:49.925525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:51.880 [2024-11-17 18:19:49.925690] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.880 [2024-11-17 18:19:49.925702] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.880 [2024-11-17 18:19:49.925709] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:51.880 [2024-11-17 18:19:49.926146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.880 [2024-11-17 18:19:49.926353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.880 [2024-11-17 18:19:49.926505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.880 [2024-11-17 18:19:49.926525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.817 18:19:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.817 18:19:50 -- common/autotest_common.sh@862 -- # return 0 00:10:52.817 18:19:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:52.817 18:19:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.817 18:19:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.817 18:19:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.817 18:19:50 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.817 18:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.817 18:19:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.817 [2024-11-17 18:19:50.810696] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.817 18:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.817 18:19:50 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:52.817 18:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.817 18:19:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.817 Malloc0 00:10:52.817 18:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.817 18:19:50 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:52.817 18:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.817 18:19:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.817 18:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.817 18:19:50 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:52.817 18:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.817 18:19:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.817 18:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.817 18:19:50 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.817 18:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.817 18:19:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.817 [2024-11-17 18:19:50.864567] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.817 18:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.817 18:19:50 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:52.817 test case1: single bdev can't be used in multiple subsystems 00:10:52.817 18:19:50 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:52.817 18:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.817 18:19:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.817 18:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.817 18:19:50 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:52.817 18:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:52.817 18:19:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.817 18:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.817 18:19:50 -- target/nmic.sh@28 -- # nmic_status=0 00:10:52.817 18:19:50 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:52.817 18:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.817 18:19:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.817 [2024-11-17 18:19:50.888429] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:52.817 [2024-11-17 18:19:50.888472] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:52.817 [2024-11-17 18:19:50.888485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.817 request: 00:10:52.817 { 00:10:52.817 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:52.817 "namespace": { 00:10:52.817 "bdev_name": "Malloc0" 00:10:52.817 }, 00:10:52.817 "method": "nvmf_subsystem_add_ns", 00:10:52.817 "req_id": 1 00:10:52.817 } 00:10:52.817 Got JSON-RPC error response 00:10:52.817 response: 00:10:52.817 { 00:10:52.817 "code": -32602, 00:10:52.817 "message": "Invalid parameters" 00:10:52.817 } 00:10:52.817 18:19:50 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:52.817 18:19:50 -- target/nmic.sh@29 -- # nmic_status=1 00:10:52.817 18:19:50 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:52.817 Adding namespace failed - expected result. 00:10:52.817 18:19:50 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:52.817 test case2: host connect to nvmf target in multiple paths 00:10:52.817 18:19:50 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:52.817 18:19:50 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:52.817 18:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.817 18:19:50 -- common/autotest_common.sh@10 -- # set +x 00:10:52.817 [2024-11-17 18:19:50.900555] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:52.817 18:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.817 18:19:50 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.817 18:19:51 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:53.076 18:19:51 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.076 18:19:51 -- common/autotest_common.sh@1187 -- # local i=0 00:10:53.076 18:19:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.076 18:19:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:53.076 18:19:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:54.981 18:19:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:54.981 18:19:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:54.981 18:19:53 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.981 18:19:53 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:10:54.981 18:19:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.981 18:19:53 -- common/autotest_common.sh@1197 -- # return 0 00:10:54.981 18:19:53 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:54.982 [global] 00:10:54.982 thread=1 00:10:54.982 invalidate=1 00:10:54.982 rw=write 00:10:54.982 time_based=1 00:10:54.982 runtime=1 00:10:54.982 ioengine=libaio 00:10:54.982 direct=1 00:10:54.982 bs=4096 00:10:54.982 iodepth=1 00:10:54.982 norandommap=0 00:10:54.982 numjobs=1 00:10:54.982 00:10:54.982 verify_dump=1 00:10:54.982 verify_backlog=512 00:10:54.982 verify_state_save=0 00:10:54.982 do_verify=1 00:10:54.982 verify=crc32c-intel 00:10:54.982 [job0] 00:10:54.982 filename=/dev/nvme0n1 00:10:54.982 Could not set queue depth (nvme0n1) 00:10:55.240 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.240 fio-3.35 00:10:55.240 Starting 1 thread 00:10:56.619 00:10:56.619 job0: (groupid=0, jobs=1): err= 0: pid=74864: Sun Nov 17 18:19:54 2024 00:10:56.619 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:56.619 slat (nsec): min=11413, max=72474, avg=14094.37, stdev=4948.35 00:10:56.619 clat (usec): min=125, max=318, avg=173.43, stdev=23.92 00:10:56.619 lat (usec): min=137, max=345, avg=187.52, stdev=24.95 00:10:56.619 clat percentiles (usec): 00:10:56.619 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:10:56.619 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:10:56.619 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 219], 00:10:56.619 | 99.00th=[ 239], 99.50th=[ 251], 99.90th=[ 297], 99.95th=[ 318], 00:10:56.619 | 99.99th=[ 318] 00:10:56.619 write: IOPS=3262, BW=12.7MiB/s (13.4MB/s)(12.8MiB/1001msec); 0 zone resets 00:10:56.619 slat (nsec): min=13619, max=97415, avg=21536.64, stdev=7009.23 00:10:56.619 clat (usec): min=77, max=230, avg=105.04, stdev=17.61 00:10:56.619 lat (usec): min=94, max=280, avg=126.57, stdev=19.98 00:10:56.619 clat percentiles (usec): 00:10:56.619 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 91], 00:10:56.619 | 30.00th=[ 95], 40.00th=[ 98], 50.00th=[ 101], 60.00th=[ 104], 00:10:56.619 | 70.00th=[ 111], 80.00th=[ 119], 90.00th=[ 130], 95.00th=[ 141], 00:10:56.619 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 194], 99.95th=[ 206], 00:10:56.619 | 99.99th=[ 231] 00:10:56.619 bw ( KiB/s): min=12424, max=12424, per=95.20%, avg=12424.00, stdev= 0.00, samples=1 00:10:56.619 iops : min= 3106, max= 3106, avg=3106.00, stdev= 0.00, samples=1 00:10:56.619 lat (usec) : 100=24.69%, 250=75.04%, 500=0.27% 00:10:56.619 cpu : usr=2.80%, sys=8.20%, ctx=6338, majf=0, minf=5 00:10:56.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.619 issued rwts: total=3072,3266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.619 00:10:56.619 Run status group 0 (all jobs): 00:10:56.619 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:56.619 WRITE: bw=12.7MiB/s (13.4MB/s), 12.7MiB/s-12.7MiB/s (13.4MB/s-13.4MB/s), io=12.8MiB (13.4MB), run=1001-1001msec 00:10:56.619 00:10:56.619 Disk stats (read/write): 
00:10:56.619 nvme0n1: ios=2698/3072, merge=0/0, ticks=492/339, in_queue=831, util=91.28% 00:10:56.619 18:19:54 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:56.619 18:19:54 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.619 18:19:54 -- common/autotest_common.sh@1208 -- # local i=0 00:10:56.619 18:19:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:56.619 18:19:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.619 18:19:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:56.619 18:19:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.619 18:19:54 -- common/autotest_common.sh@1220 -- # return 0 00:10:56.619 18:19:54 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:56.619 18:19:54 -- target/nmic.sh@53 -- # nvmftestfini 00:10:56.619 18:19:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:56.619 18:19:54 -- nvmf/common.sh@116 -- # sync 00:10:56.619 18:19:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:56.619 18:19:54 -- nvmf/common.sh@119 -- # set +e 00:10:56.619 18:19:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:56.619 18:19:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:56.619 rmmod nvme_tcp 00:10:56.619 rmmod nvme_fabrics 00:10:56.619 rmmod nvme_keyring 00:10:56.619 18:19:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:56.619 18:19:54 -- nvmf/common.sh@123 -- # set -e 00:10:56.619 18:19:54 -- nvmf/common.sh@124 -- # return 0 00:10:56.619 18:19:54 -- nvmf/common.sh@477 -- # '[' -n 74772 ']' 00:10:56.619 18:19:54 -- nvmf/common.sh@478 -- # killprocess 74772 00:10:56.620 18:19:54 -- common/autotest_common.sh@936 -- # '[' -z 74772 ']' 00:10:56.620 18:19:54 -- common/autotest_common.sh@940 -- # kill -0 74772 00:10:56.620 18:19:54 -- common/autotest_common.sh@941 -- # uname 00:10:56.620 18:19:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:56.620 18:19:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74772 00:10:56.620 18:19:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:56.620 18:19:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:56.620 killing process with pid 74772 00:10:56.620 18:19:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74772' 00:10:56.620 18:19:54 -- common/autotest_common.sh@955 -- # kill 74772 00:10:56.620 18:19:54 -- common/autotest_common.sh@960 -- # wait 74772 00:10:56.879 18:19:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:56.879 18:19:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:56.879 18:19:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:56.879 18:19:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.879 18:19:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:56.879 18:19:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.879 18:19:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.879 18:19:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.879 18:19:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:56.879 00:10:56.879 real 0m5.825s 00:10:56.879 user 0m18.863s 00:10:56.879 sys 0m2.243s 00:10:56.879 18:19:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:56.879 ************************************ 00:10:56.879 END 
TEST nvmf_nmic 00:10:56.879 18:19:54 -- common/autotest_common.sh@10 -- # set +x 00:10:56.879 ************************************ 00:10:56.879 18:19:55 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:56.879 18:19:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:56.879 18:19:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:56.879 18:19:55 -- common/autotest_common.sh@10 -- # set +x 00:10:56.879 ************************************ 00:10:56.879 START TEST nvmf_fio_target 00:10:56.879 ************************************ 00:10:56.879 18:19:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:56.879 * Looking for test storage... 00:10:56.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:56.879 18:19:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:56.879 18:19:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:56.879 18:19:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:57.139 18:19:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:57.139 18:19:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:57.139 18:19:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:57.139 18:19:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:57.139 18:19:55 -- scripts/common.sh@335 -- # IFS=.-: 00:10:57.139 18:19:55 -- scripts/common.sh@335 -- # read -ra ver1 00:10:57.139 18:19:55 -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.139 18:19:55 -- scripts/common.sh@336 -- # read -ra ver2 00:10:57.139 18:19:55 -- scripts/common.sh@337 -- # local 'op=<' 00:10:57.139 18:19:55 -- scripts/common.sh@339 -- # ver1_l=2 00:10:57.139 18:19:55 -- scripts/common.sh@340 -- # ver2_l=1 00:10:57.139 18:19:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:57.139 18:19:55 -- scripts/common.sh@343 -- # case "$op" in 00:10:57.139 18:19:55 -- scripts/common.sh@344 -- # : 1 00:10:57.139 18:19:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:57.139 18:19:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.139 18:19:55 -- scripts/common.sh@364 -- # decimal 1 00:10:57.139 18:19:55 -- scripts/common.sh@352 -- # local d=1 00:10:57.139 18:19:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.139 18:19:55 -- scripts/common.sh@354 -- # echo 1 00:10:57.139 18:19:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:57.139 18:19:55 -- scripts/common.sh@365 -- # decimal 2 00:10:57.139 18:19:55 -- scripts/common.sh@352 -- # local d=2 00:10:57.139 18:19:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.139 18:19:55 -- scripts/common.sh@354 -- # echo 2 00:10:57.139 18:19:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:57.139 18:19:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:57.140 18:19:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:57.140 18:19:55 -- scripts/common.sh@367 -- # return 0 00:10:57.140 18:19:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.140 18:19:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:57.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.140 --rc genhtml_branch_coverage=1 00:10:57.140 --rc genhtml_function_coverage=1 00:10:57.140 --rc genhtml_legend=1 00:10:57.140 --rc geninfo_all_blocks=1 00:10:57.140 --rc geninfo_unexecuted_blocks=1 00:10:57.140 00:10:57.140 ' 00:10:57.140 18:19:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:57.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.140 --rc genhtml_branch_coverage=1 00:10:57.140 --rc genhtml_function_coverage=1 00:10:57.140 --rc genhtml_legend=1 00:10:57.140 --rc geninfo_all_blocks=1 00:10:57.140 --rc geninfo_unexecuted_blocks=1 00:10:57.140 00:10:57.140 ' 00:10:57.140 18:19:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:57.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.140 --rc genhtml_branch_coverage=1 00:10:57.140 --rc genhtml_function_coverage=1 00:10:57.140 --rc genhtml_legend=1 00:10:57.140 --rc geninfo_all_blocks=1 00:10:57.140 --rc geninfo_unexecuted_blocks=1 00:10:57.140 00:10:57.140 ' 00:10:57.140 18:19:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:57.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.140 --rc genhtml_branch_coverage=1 00:10:57.140 --rc genhtml_function_coverage=1 00:10:57.140 --rc genhtml_legend=1 00:10:57.140 --rc geninfo_all_blocks=1 00:10:57.140 --rc geninfo_unexecuted_blocks=1 00:10:57.140 00:10:57.140 ' 00:10:57.140 18:19:55 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:57.140 18:19:55 -- nvmf/common.sh@7 -- # uname -s 00:10:57.140 18:19:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.140 18:19:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.140 18:19:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.140 18:19:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.140 18:19:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.140 18:19:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.140 18:19:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.140 18:19:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.140 18:19:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.140 18:19:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.140 18:19:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:10:57.140 
18:19:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:10:57.140 18:19:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.140 18:19:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.140 18:19:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:57.140 18:19:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.140 18:19:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.140 18:19:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.140 18:19:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.140 18:19:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.140 18:19:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.140 18:19:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.140 18:19:55 -- paths/export.sh@5 -- # export PATH 00:10:57.140 18:19:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.140 18:19:55 -- nvmf/common.sh@46 -- # : 0 00:10:57.140 18:19:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:57.140 18:19:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:57.140 18:19:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:57.140 18:19:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.140 18:19:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.140 18:19:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:10:57.140 18:19:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:57.140 18:19:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:57.140 18:19:55 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:57.140 18:19:55 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:57.140 18:19:55 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:57.140 18:19:55 -- target/fio.sh@16 -- # nvmftestinit 00:10:57.140 18:19:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:57.140 18:19:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.140 18:19:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:57.140 18:19:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:57.140 18:19:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:57.140 18:19:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.140 18:19:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.140 18:19:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.140 18:19:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:57.140 18:19:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:57.140 18:19:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:57.140 18:19:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:57.140 18:19:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:57.140 18:19:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:57.140 18:19:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.140 18:19:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.140 18:19:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:57.140 18:19:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:57.140 18:19:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:57.140 18:19:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:57.140 18:19:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:57.140 18:19:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.140 18:19:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:57.140 18:19:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:57.140 18:19:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:57.140 18:19:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:57.140 18:19:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:57.140 18:19:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:57.140 Cannot find device "nvmf_tgt_br" 00:10:57.140 18:19:55 -- nvmf/common.sh@154 -- # true 00:10:57.140 18:19:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.140 Cannot find device "nvmf_tgt_br2" 00:10:57.140 18:19:55 -- nvmf/common.sh@155 -- # true 00:10:57.140 18:19:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:57.140 18:19:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:57.140 Cannot find device "nvmf_tgt_br" 00:10:57.140 18:19:55 -- nvmf/common.sh@157 -- # true 00:10:57.140 18:19:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:57.140 Cannot find device "nvmf_tgt_br2" 00:10:57.140 18:19:55 -- nvmf/common.sh@158 -- # true 00:10:57.140 18:19:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:57.140 18:19:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:57.140 18:19:55 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.140 18:19:55 -- nvmf/common.sh@161 -- # true 00:10:57.140 18:19:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.140 18:19:55 -- nvmf/common.sh@162 -- # true 00:10:57.140 18:19:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:57.140 18:19:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:57.140 18:19:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:57.140 18:19:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:57.399 18:19:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:57.399 18:19:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:57.399 18:19:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:57.399 18:19:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:57.399 18:19:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:57.399 18:19:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:57.399 18:19:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:57.399 18:19:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:57.399 18:19:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:57.399 18:19:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:57.399 18:19:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:57.399 18:19:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:57.399 18:19:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:57.399 18:19:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:57.399 18:19:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:57.399 18:19:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:57.399 18:19:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:57.399 18:19:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:57.399 18:19:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:57.399 18:19:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:57.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:10:57.399 00:10:57.399 --- 10.0.0.2 ping statistics --- 00:10:57.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.399 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:10:57.399 18:19:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:57.399 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:57.400 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:10:57.400 00:10:57.400 --- 10.0.0.3 ping statistics --- 00:10:57.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.400 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:57.400 18:19:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:57.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:57.400 00:10:57.400 --- 10.0.0.1 ping statistics --- 00:10:57.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.400 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:57.400 18:19:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.400 18:19:55 -- nvmf/common.sh@421 -- # return 0 00:10:57.400 18:19:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:57.400 18:19:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.400 18:19:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:57.400 18:19:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:57.400 18:19:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.400 18:19:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:57.400 18:19:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:57.400 18:19:55 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:57.400 18:19:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:57.400 18:19:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:57.400 18:19:55 -- common/autotest_common.sh@10 -- # set +x 00:10:57.400 18:19:55 -- nvmf/common.sh@469 -- # nvmfpid=75048 00:10:57.400 18:19:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.400 18:19:55 -- nvmf/common.sh@470 -- # waitforlisten 75048 00:10:57.400 18:19:55 -- common/autotest_common.sh@829 -- # '[' -z 75048 ']' 00:10:57.400 18:19:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.400 18:19:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:57.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.400 18:19:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.400 18:19:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:57.400 18:19:55 -- common/autotest_common.sh@10 -- # set +x 00:10:57.400 [2024-11-17 18:19:55.658609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:57.400 [2024-11-17 18:19:55.658730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.659 [2024-11-17 18:19:55.793227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.659 [2024-11-17 18:19:55.827623] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:57.659 [2024-11-17 18:19:55.827823] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.659 [2024-11-17 18:19:55.827837] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:57.659 [2024-11-17 18:19:55.827845] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.659 [2024-11-17 18:19:55.828015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.659 [2024-11-17 18:19:55.831428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.659 [2024-11-17 18:19:55.831540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.659 [2024-11-17 18:19:55.831546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.598 18:19:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.598 18:19:56 -- common/autotest_common.sh@862 -- # return 0 00:10:58.598 18:19:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:58.598 18:19:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:58.598 18:19:56 -- common/autotest_common.sh@10 -- # set +x 00:10:58.598 18:19:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.598 18:19:56 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:58.857 [2024-11-17 18:19:56.916268] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.857 18:19:56 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.116 18:19:57 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:59.116 18:19:57 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.375 18:19:57 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:59.375 18:19:57 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.634 18:19:57 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:59.634 18:19:57 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.893 18:19:58 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:59.893 18:19:58 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:00.152 18:19:58 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.411 18:19:58 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:00.411 18:19:58 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.671 18:19:58 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:00.671 18:19:58 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.930 18:19:59 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:00.930 18:19:59 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:01.214 18:19:59 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:01.479 18:19:59 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:01.479 18:19:59 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.738 18:19:59 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:01.738 18:19:59 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:01.738 18:19:59 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.997 [2024-11-17 18:20:00.202724] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.997 18:20:00 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:02.255 18:20:00 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:02.515 18:20:00 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.774 18:20:00 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:02.774 18:20:00 -- common/autotest_common.sh@1187 -- # local i=0 00:11:02.774 18:20:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.774 18:20:00 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:11:02.774 18:20:00 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:11:02.774 18:20:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:04.679 18:20:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:04.679 18:20:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:04.679 18:20:02 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.679 18:20:02 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:11:04.679 18:20:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.679 18:20:02 -- common/autotest_common.sh@1197 -- # return 0 00:11:04.679 18:20:02 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:04.679 [global] 00:11:04.679 thread=1 00:11:04.679 invalidate=1 00:11:04.679 rw=write 00:11:04.679 time_based=1 00:11:04.679 runtime=1 00:11:04.679 ioengine=libaio 00:11:04.679 direct=1 00:11:04.679 bs=4096 00:11:04.679 iodepth=1 00:11:04.679 norandommap=0 00:11:04.679 numjobs=1 00:11:04.679 00:11:04.679 verify_dump=1 00:11:04.679 verify_backlog=512 00:11:04.679 verify_state_save=0 00:11:04.679 do_verify=1 00:11:04.679 verify=crc32c-intel 00:11:04.679 [job0] 00:11:04.679 filename=/dev/nvme0n1 00:11:04.679 [job1] 00:11:04.679 filename=/dev/nvme0n2 00:11:04.679 [job2] 00:11:04.679 filename=/dev/nvme0n3 00:11:04.679 [job3] 00:11:04.679 filename=/dev/nvme0n4 00:11:04.938 Could not set queue depth (nvme0n1) 00:11:04.938 Could not set queue depth (nvme0n2) 00:11:04.938 Could not set queue depth (nvme0n3) 00:11:04.938 Could not set queue depth (nvme0n4) 00:11:04.938 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.938 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.938 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.938 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.938 fio-3.35 00:11:04.938 Starting 4 threads 00:11:06.314 00:11:06.314 job0: (groupid=0, jobs=1): err= 0: pid=75232: Sun Nov 17 18:20:04 2024 00:11:06.314 read: IOPS=1782, BW=7129KiB/s (7300kB/s)(7136KiB/1001msec) 
00:11:06.314 slat (nsec): min=14693, max=46045, avg=17542.91, stdev=3210.28 00:11:06.314 clat (usec): min=167, max=562, avg=264.10, stdev=36.82 00:11:06.314 lat (usec): min=185, max=586, avg=281.64, stdev=37.83 00:11:06.314 clat percentiles (usec): 00:11:06.314 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:11:06.314 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:11:06.314 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 314], 00:11:06.314 | 99.00th=[ 469], 99.50th=[ 537], 99.90th=[ 553], 99.95th=[ 562], 00:11:06.314 | 99.99th=[ 562] 00:11:06.314 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:06.314 slat (nsec): min=19068, max=97427, avg=27058.16, stdev=7065.18 00:11:06.314 clat (usec): min=92, max=1976, avg=211.82, stdev=52.98 00:11:06.314 lat (usec): min=115, max=2003, avg=238.87, stdev=54.55 00:11:06.314 clat percentiles (usec): 00:11:06.314 | 1.00th=[ 117], 5.00th=[ 167], 10.00th=[ 178], 20.00th=[ 188], 00:11:06.314 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 215], 00:11:06.314 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 251], 95.00th=[ 269], 00:11:06.314 | 99.00th=[ 306], 99.50th=[ 392], 99.90th=[ 486], 99.95th=[ 562], 00:11:06.314 | 99.99th=[ 1975] 00:11:06.314 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:11:06.314 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:06.314 lat (usec) : 100=0.08%, 250=63.70%, 500=35.78%, 750=0.42% 00:11:06.314 lat (msec) : 2=0.03% 00:11:06.314 cpu : usr=1.20%, sys=7.40%, ctx=3832, majf=0, minf=13 00:11:06.314 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.314 issued rwts: total=1784,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.315 job1: (groupid=0, jobs=1): err= 0: pid=75233: Sun Nov 17 18:20:04 2024 00:11:06.315 read: IOPS=3006, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1001msec) 00:11:06.315 slat (nsec): min=11573, max=42530, avg=14723.42, stdev=2858.11 00:11:06.315 clat (usec): min=126, max=658, avg=164.63, stdev=19.07 00:11:06.315 lat (usec): min=139, max=671, avg=179.35, stdev=19.29 00:11:06.315 clat percentiles (usec): 00:11:06.315 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:11:06.315 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:11:06.315 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:11:06.315 | 99.00th=[ 204], 99.50th=[ 212], 99.90th=[ 225], 99.95th=[ 586], 00:11:06.315 | 99.99th=[ 660] 00:11:06.315 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:06.315 slat (usec): min=15, max=101, avg=23.50, stdev= 6.83 00:11:06.315 clat (usec): min=90, max=203, avg=122.50, stdev=13.26 00:11:06.315 lat (usec): min=110, max=305, avg=146.00, stdev=15.31 00:11:06.315 clat percentiles (usec): 00:11:06.315 | 1.00th=[ 99], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 113], 00:11:06.315 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 124], 00:11:06.315 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 147], 00:11:06.315 | 99.00th=[ 161], 99.50th=[ 172], 99.90th=[ 184], 99.95th=[ 202], 00:11:06.315 | 99.99th=[ 204] 00:11:06.315 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 
00:11:06.315 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:06.315 lat (usec) : 100=0.97%, 250=98.98%, 500=0.02%, 750=0.03% 00:11:06.315 cpu : usr=2.90%, sys=8.60%, ctx=6082, majf=0, minf=3 00:11:06.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.315 issued rwts: total=3010,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.315 job2: (groupid=0, jobs=1): err= 0: pid=75234: Sun Nov 17 18:20:04 2024 00:11:06.315 read: IOPS=1769, BW=7077KiB/s (7247kB/s)(7084KiB/1001msec) 00:11:06.315 slat (nsec): min=13228, max=61650, avg=16564.37, stdev=4359.66 00:11:06.315 clat (usec): min=176, max=633, avg=261.74, stdev=24.55 00:11:06.315 lat (usec): min=198, max=651, avg=278.31, stdev=25.38 00:11:06.315 clat percentiles (usec): 00:11:06.315 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 245], 00:11:06.315 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:11:06.315 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:11:06.315 | 99.00th=[ 343], 99.50th=[ 371], 99.90th=[ 437], 99.95th=[ 635], 00:11:06.315 | 99.99th=[ 635] 00:11:06.315 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:06.315 slat (nsec): min=18199, max=85814, avg=27024.32, stdev=6362.84 00:11:06.315 clat (usec): min=116, max=674, avg=216.59, stdev=43.28 00:11:06.315 lat (usec): min=136, max=699, avg=243.62, stdev=45.96 00:11:06.315 clat percentiles (usec): 00:11:06.315 | 1.00th=[ 129], 5.00th=[ 172], 10.00th=[ 182], 20.00th=[ 192], 00:11:06.315 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:11:06.315 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 258], 95.00th=[ 285], 00:11:06.315 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 545], 99.95th=[ 586], 00:11:06.315 | 99.99th=[ 676] 00:11:06.315 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:11:06.315 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:06.315 lat (usec) : 250=60.62%, 500=39.25%, 750=0.13% 00:11:06.315 cpu : usr=1.40%, sys=6.90%, ctx=3821, majf=0, minf=11 00:11:06.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.315 issued rwts: total=1771,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.315 job3: (groupid=0, jobs=1): err= 0: pid=75235: Sun Nov 17 18:20:04 2024 00:11:06.315 read: IOPS=2716, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:11:06.315 slat (nsec): min=11495, max=40094, avg=14277.54, stdev=2891.53 00:11:06.315 clat (usec): min=135, max=248, avg=174.40, stdev=15.66 00:11:06.315 lat (usec): min=148, max=266, avg=188.68, stdev=15.94 00:11:06.315 clat percentiles (usec): 00:11:06.315 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:11:06.315 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:11:06.315 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:11:06.315 | 99.00th=[ 221], 99.50th=[ 227], 99.90th=[ 241], 99.95th=[ 243], 00:11:06.315 | 99.99th=[ 249] 00:11:06.315 write: IOPS=3068, BW=12.0MiB/s 
(12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:06.315 slat (nsec): min=14881, max=90458, avg=23684.58, stdev=7606.94 00:11:06.315 clat (usec): min=97, max=273, avg=131.45, stdev=14.77 00:11:06.315 lat (usec): min=119, max=310, avg=155.13, stdev=16.99 00:11:06.315 clat percentiles (usec): 00:11:06.315 | 1.00th=[ 106], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 120], 00:11:06.315 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:11:06.315 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 159], 00:11:06.315 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 192], 99.95th=[ 210], 00:11:06.315 | 99.99th=[ 273] 00:11:06.315 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:06.315 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:06.315 lat (usec) : 100=0.02%, 250=99.97%, 500=0.02% 00:11:06.315 cpu : usr=2.40%, sys=8.60%, ctx=5791, majf=0, minf=15 00:11:06.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.315 issued rwts: total=2719,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.315 00:11:06.315 Run status group 0 (all jobs): 00:11:06.315 READ: bw=36.2MiB/s (38.0MB/s), 7077KiB/s-11.7MiB/s (7247kB/s-12.3MB/s), io=36.3MiB (38.0MB), run=1001-1001msec 00:11:06.315 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:11:06.315 00:11:06.315 Disk stats (read/write): 00:11:06.315 nvme0n1: ios=1586/1684, merge=0/0, ticks=438/387, in_queue=825, util=85.97% 00:11:06.315 nvme0n2: ios=2589/2563, merge=0/0, ticks=460/333, in_queue=793, util=87.50% 00:11:06.315 nvme0n3: ios=1536/1650, merge=0/0, ticks=413/384, in_queue=797, util=88.95% 00:11:06.315 nvme0n4: ios=2315/2560, merge=0/0, ticks=418/362, in_queue=780, util=89.61% 00:11:06.315 18:20:04 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:06.315 [global] 00:11:06.315 thread=1 00:11:06.315 invalidate=1 00:11:06.315 rw=randwrite 00:11:06.315 time_based=1 00:11:06.315 runtime=1 00:11:06.315 ioengine=libaio 00:11:06.315 direct=1 00:11:06.315 bs=4096 00:11:06.315 iodepth=1 00:11:06.315 norandommap=0 00:11:06.315 numjobs=1 00:11:06.315 00:11:06.315 verify_dump=1 00:11:06.315 verify_backlog=512 00:11:06.315 verify_state_save=0 00:11:06.315 do_verify=1 00:11:06.315 verify=crc32c-intel 00:11:06.315 [job0] 00:11:06.315 filename=/dev/nvme0n1 00:11:06.315 [job1] 00:11:06.315 filename=/dev/nvme0n2 00:11:06.315 [job2] 00:11:06.315 filename=/dev/nvme0n3 00:11:06.315 [job3] 00:11:06.315 filename=/dev/nvme0n4 00:11:06.315 Could not set queue depth (nvme0n1) 00:11:06.315 Could not set queue depth (nvme0n2) 00:11:06.315 Could not set queue depth (nvme0n3) 00:11:06.315 Could not set queue depth (nvme0n4) 00:11:06.315 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.315 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.316 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.316 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:11:06.316 fio-3.35 00:11:06.316 Starting 4 threads 00:11:07.692 00:11:07.692 job0: (groupid=0, jobs=1): err= 0: pid=75294: Sun Nov 17 18:20:05 2024 00:11:07.692 read: IOPS=1758, BW=7033KiB/s (7202kB/s)(7040KiB/1001msec) 00:11:07.692 slat (nsec): min=11614, max=58803, avg=14761.39, stdev=3694.61 00:11:07.692 clat (usec): min=168, max=733, avg=273.95, stdev=39.16 00:11:07.692 lat (usec): min=193, max=748, avg=288.71, stdev=39.67 00:11:07.692 clat percentiles (usec): 00:11:07.692 | 1.00th=[ 221], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 251], 00:11:07.692 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:11:07.692 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 326], 00:11:07.692 | 99.00th=[ 445], 99.50th=[ 490], 99.90th=[ 578], 99.95th=[ 734], 00:11:07.692 | 99.99th=[ 734] 00:11:07.692 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:07.692 slat (usec): min=17, max=119, avg=24.51, stdev= 7.80 00:11:07.692 clat (usec): min=99, max=1929, avg=212.03, stdev=65.34 00:11:07.692 lat (usec): min=120, max=2006, avg=236.54, stdev=68.25 00:11:07.692 clat percentiles (usec): 00:11:07.692 | 1.00th=[ 114], 5.00th=[ 128], 10.00th=[ 167], 20.00th=[ 186], 00:11:07.692 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:11:07.692 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 251], 95.00th=[ 347], 00:11:07.692 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 537], 99.95th=[ 545], 00:11:07.692 | 99.99th=[ 1926] 00:11:07.692 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.692 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.692 lat (usec) : 100=0.03%, 250=57.51%, 500=42.20%, 750=0.24% 00:11:07.692 lat (msec) : 2=0.03% 00:11:07.692 cpu : usr=1.90%, sys=5.80%, ctx=3811, majf=0, minf=19 00:11:07.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.692 issued rwts: total=1760,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.692 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.692 job1: (groupid=0, jobs=1): err= 0: pid=75295: Sun Nov 17 18:20:05 2024 00:11:07.692 read: IOPS=2985, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1001msec) 00:11:07.692 slat (nsec): min=11175, max=44458, avg=13533.48, stdev=3475.60 00:11:07.692 clat (usec): min=125, max=782, avg=168.69, stdev=23.32 00:11:07.692 lat (usec): min=137, max=795, avg=182.22, stdev=23.46 00:11:07.692 clat percentiles (usec): 00:11:07.692 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:11:07.692 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:11:07.692 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 200], 00:11:07.692 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 506], 99.95th=[ 766], 00:11:07.692 | 99.99th=[ 783] 00:11:07.692 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:07.692 slat (usec): min=17, max=101, avg=20.37, stdev= 5.04 00:11:07.692 clat (usec): min=93, max=232, avg=124.70, stdev=13.76 00:11:07.692 lat (usec): min=111, max=333, avg=145.07, stdev=14.87 00:11:07.692 clat percentiles (usec): 00:11:07.692 | 1.00th=[ 100], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 114], 00:11:07.692 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 126], 00:11:07.692 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 151], 
00:11:07.692 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 200], 99.95th=[ 204], 00:11:07.692 | 99.99th=[ 233] 00:11:07.692 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:07.692 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:07.692 lat (usec) : 100=0.50%, 250=99.44%, 500=0.02%, 750=0.02%, 1000=0.03% 00:11:07.692 cpu : usr=2.30%, sys=7.90%, ctx=6060, majf=0, minf=11 00:11:07.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.692 issued rwts: total=2988,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.692 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.692 job2: (groupid=0, jobs=1): err= 0: pid=75296: Sun Nov 17 18:20:05 2024 00:11:07.692 read: IOPS=2721, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:11:07.692 slat (nsec): min=10683, max=45047, avg=13334.34, stdev=3199.06 00:11:07.692 clat (usec): min=131, max=2634, avg=174.36, stdev=50.09 00:11:07.692 lat (usec): min=143, max=2647, avg=187.70, stdev=50.25 00:11:07.692 clat percentiles (usec): 00:11:07.692 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:11:07.692 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:11:07.692 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:11:07.692 | 99.00th=[ 219], 99.50th=[ 229], 99.90th=[ 245], 99.95th=[ 260], 00:11:07.692 | 99.99th=[ 2638] 00:11:07.692 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:07.692 slat (nsec): min=14228, max=91369, avg=21204.87, stdev=4752.08 00:11:07.692 clat (usec): min=93, max=402, avg=134.58, stdev=15.50 00:11:07.692 lat (usec): min=113, max=431, avg=155.79, stdev=16.25 00:11:07.692 clat percentiles (usec): 00:11:07.692 | 1.00th=[ 106], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 123], 00:11:07.692 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:11:07.692 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 161], 00:11:07.692 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 202], 99.95th=[ 239], 00:11:07.692 | 99.99th=[ 404] 00:11:07.692 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:07.692 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:07.692 lat (usec) : 100=0.07%, 250=99.88%, 500=0.03% 00:11:07.692 lat (msec) : 4=0.02% 00:11:07.692 cpu : usr=2.20%, sys=8.00%, ctx=5798, majf=0, minf=5 00:11:07.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.692 issued rwts: total=2724,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.692 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.692 job3: (groupid=0, jobs=1): err= 0: pid=75297: Sun Nov 17 18:20:05 2024 00:11:07.692 read: IOPS=1802, BW=7209KiB/s (7382kB/s)(7216KiB/1001msec) 00:11:07.692 slat (nsec): min=11992, max=47866, avg=16102.95, stdev=4260.71 00:11:07.692 clat (usec): min=178, max=1229, avg=278.65, stdev=56.90 00:11:07.692 lat (usec): min=194, max=1247, avg=294.75, stdev=58.26 00:11:07.692 clat percentiles (usec): 00:11:07.692 | 1.00th=[ 225], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:11:07.692 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 
265], 60.00th=[ 273], 00:11:07.692 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 392], 00:11:07.692 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 578], 99.95th=[ 1237], 00:11:07.692 | 99.99th=[ 1237] 00:11:07.693 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:07.693 slat (nsec): min=14826, max=99547, avg=23073.29, stdev=5954.11 00:11:07.693 clat (usec): min=102, max=555, avg=202.06, stdev=35.00 00:11:07.693 lat (usec): min=121, max=577, avg=225.13, stdev=35.97 00:11:07.693 clat percentiles (usec): 00:11:07.693 | 1.00th=[ 119], 5.00th=[ 137], 10.00th=[ 165], 20.00th=[ 182], 00:11:07.693 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:11:07.693 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 237], 95.00th=[ 245], 00:11:07.693 | 99.00th=[ 277], 99.50th=[ 388], 99.90th=[ 490], 99.95th=[ 519], 00:11:07.693 | 99.99th=[ 553] 00:11:07.693 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.693 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.693 lat (usec) : 250=61.66%, 500=37.56%, 750=0.75% 00:11:07.693 lat (msec) : 2=0.03% 00:11:07.693 cpu : usr=1.50%, sys=6.00%, ctx=3852, majf=0, minf=11 00:11:07.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.693 issued rwts: total=1804,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.693 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.693 00:11:07.693 Run status group 0 (all jobs): 00:11:07.693 READ: bw=36.2MiB/s (38.0MB/s), 7033KiB/s-11.7MiB/s (7202kB/s-12.2MB/s), io=36.2MiB (38.0MB), run=1001-1001msec 00:11:07.693 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:11:07.693 00:11:07.693 Disk stats (read/write): 00:11:07.693 nvme0n1: ios=1586/1706, merge=0/0, ticks=449/385, in_queue=834, util=87.78% 00:11:07.693 nvme0n2: ios=2606/2673, merge=0/0, ticks=479/367, in_queue=846, util=88.57% 00:11:07.693 nvme0n3: ios=2416/2560, merge=0/0, ticks=430/366, in_queue=796, util=89.16% 00:11:07.693 nvme0n4: ios=1536/1828, merge=0/0, ticks=439/386, in_queue=825, util=89.72% 00:11:07.693 18:20:05 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:07.693 [global] 00:11:07.693 thread=1 00:11:07.693 invalidate=1 00:11:07.693 rw=write 00:11:07.693 time_based=1 00:11:07.693 runtime=1 00:11:07.693 ioengine=libaio 00:11:07.693 direct=1 00:11:07.693 bs=4096 00:11:07.693 iodepth=128 00:11:07.693 norandommap=0 00:11:07.693 numjobs=1 00:11:07.693 00:11:07.693 verify_dump=1 00:11:07.693 verify_backlog=512 00:11:07.693 verify_state_save=0 00:11:07.693 do_verify=1 00:11:07.693 verify=crc32c-intel 00:11:07.693 [job0] 00:11:07.693 filename=/dev/nvme0n1 00:11:07.693 [job1] 00:11:07.693 filename=/dev/nvme0n2 00:11:07.693 [job2] 00:11:07.693 filename=/dev/nvme0n3 00:11:07.693 [job3] 00:11:07.693 filename=/dev/nvme0n4 00:11:07.693 Could not set queue depth (nvme0n1) 00:11:07.693 Could not set queue depth (nvme0n2) 00:11:07.693 Could not set queue depth (nvme0n3) 00:11:07.693 Could not set queue depth (nvme0n4) 00:11:07.693 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.693 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.693 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.693 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.693 fio-3.35 00:11:07.693 Starting 4 threads 00:11:09.070 00:11:09.070 job0: (groupid=0, jobs=1): err= 0: pid=75357: Sun Nov 17 18:20:06 2024 00:11:09.070 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:11:09.070 slat (usec): min=8, max=5805, avg=172.55, stdev=886.47 00:11:09.070 clat (usec): min=16547, max=24768, avg=22747.80, stdev=1080.54 00:11:09.070 lat (usec): min=21444, max=24780, avg=22920.35, stdev=604.18 00:11:09.070 clat percentiles (usec): 00:11:09.070 | 1.00th=[17433], 5.00th=[21627], 10.00th=[22152], 20.00th=[22414], 00:11:09.070 | 30.00th=[22676], 40.00th=[22676], 50.00th=[22676], 60.00th=[22938], 00:11:09.070 | 70.00th=[22938], 80.00th=[23462], 90.00th=[23725], 95.00th=[23987], 00:11:09.070 | 99.00th=[24511], 99.50th=[24773], 99.90th=[24773], 99.95th=[24773], 00:11:09.070 | 99.99th=[24773] 00:11:09.070 write: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1003msec); 0 zone resets 00:11:09.070 slat (usec): min=11, max=5764, avg=179.95, stdev=879.33 00:11:09.070 clat (usec): min=185, max=25240, avg=22618.00, stdev=2600.67 00:11:09.070 lat (usec): min=4231, max=25307, avg=22797.95, stdev=2453.32 00:11:09.070 clat percentiles (usec): 00:11:09.070 | 1.00th=[ 5014], 5.00th=[18220], 10.00th=[22152], 20.00th=[22414], 00:11:09.070 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:11:09.070 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:11:09.070 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:11:09.070 | 99.99th=[25297] 00:11:09.070 bw ( KiB/s): min=10760, max=12288, per=16.93%, avg=11524.00, stdev=1080.46, samples=2 00:11:09.070 iops : min= 2690, max= 3072, avg=2881.00, stdev=270.11, samples=2 00:11:09.070 lat (usec) : 250=0.02% 00:11:09.070 lat (msec) : 10=0.57%, 20=4.20%, 50=95.21% 00:11:09.070 cpu : usr=2.39%, sys=7.08%, ctx=175, majf=0, minf=19 00:11:09.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:09.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.070 issued rwts: total=2560,3009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.070 job1: (groupid=0, jobs=1): err= 0: pid=75358: Sun Nov 17 18:20:06 2024 00:11:09.070 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:11:09.070 slat (usec): min=6, max=2943, avg=81.63, stdev=354.58 00:11:09.070 clat (usec): min=7993, max=14802, avg=10818.76, stdev=1090.15 00:11:09.070 lat (usec): min=8140, max=16154, avg=10900.39, stdev=1103.00 00:11:09.070 clat percentiles (usec): 00:11:09.070 | 1.00th=[ 8717], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:11:09.070 | 30.00th=[10159], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:11:09.070 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12256], 95.00th=[12649], 00:11:09.070 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13698], 99.95th=[13960], 00:11:09.070 | 99.99th=[14746] 00:11:09.070 write: IOPS=5938, BW=23.2MiB/s (24.3MB/s)(23.3MiB/1004msec); 0 zone resets 00:11:09.070 slat (usec): min=11, max=5551, avg=83.32, stdev=369.76 00:11:09.070 clat (usec): min=3000, max=16431, avg=11056.77, 
stdev=1076.48 00:11:09.070 lat (usec): min=3706, max=16452, avg=11140.09, stdev=1130.21 00:11:09.070 clat percentiles (usec): 00:11:09.070 | 1.00th=[ 8225], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10552], 00:11:09.070 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:11:09.070 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12256], 95.00th=[13173], 00:11:09.070 | 99.00th=[14353], 99.50th=[15139], 99.90th=[15270], 99.95th=[16057], 00:11:09.070 | 99.99th=[16450] 00:11:09.070 bw ( KiB/s): min=22096, max=24625, per=34.32%, avg=23360.50, stdev=1788.27, samples=2 00:11:09.070 iops : min= 5524, max= 6156, avg=5840.00, stdev=446.89, samples=2 00:11:09.070 lat (msec) : 4=0.08%, 10=15.90%, 20=84.02% 00:11:09.070 cpu : usr=4.99%, sys=15.55%, ctx=519, majf=0, minf=11 00:11:09.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:09.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.070 issued rwts: total=5632,5962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.070 job2: (groupid=0, jobs=1): err= 0: pid=75359: Sun Nov 17 18:20:06 2024 00:11:09.070 read: IOPS=4887, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1002msec) 00:11:09.070 slat (usec): min=7, max=3042, avg=94.73, stdev=445.97 00:11:09.070 clat (usec): min=263, max=13955, avg=12548.51, stdev=1139.14 00:11:09.070 lat (usec): min=2948, max=13987, avg=12643.25, stdev=1048.14 00:11:09.070 clat percentiles (usec): 00:11:09.070 | 1.00th=[ 6587], 5.00th=[11207], 10.00th=[11994], 20.00th=[12256], 00:11:09.070 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:11:09.070 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:11:09.070 | 99.00th=[13829], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:11:09.070 | 99.99th=[13960] 00:11:09.070 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:11:09.070 slat (usec): min=11, max=3796, avg=97.20, stdev=415.38 00:11:09.070 clat (usec): min=9585, max=13871, avg=12704.61, stdev=595.46 00:11:09.070 lat (usec): min=11038, max=13898, avg=12801.81, stdev=431.12 00:11:09.070 clat percentiles (usec): 00:11:09.070 | 1.00th=[10159], 5.00th=[12125], 10.00th=[12256], 20.00th=[12387], 00:11:09.070 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:11:09.070 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:11:09.070 | 99.00th=[13829], 99.50th=[13829], 99.90th=[13829], 99.95th=[13829], 00:11:09.070 | 99.99th=[13829] 00:11:09.070 bw ( KiB/s): min=20480, max=20521, per=30.12%, avg=20500.50, stdev=28.99, samples=2 00:11:09.070 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:11:09.070 lat (usec) : 500=0.01% 00:11:09.070 lat (msec) : 4=0.32%, 10=1.35%, 20=98.32% 00:11:09.070 cpu : usr=4.80%, sys=13.59%, ctx=315, majf=0, minf=14 00:11:09.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:09.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.070 issued rwts: total=4897,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.070 job3: (groupid=0, jobs=1): err= 0: pid=75360: Sun Nov 17 18:20:06 2024 00:11:09.070 read: IOPS=2547, BW=9.95MiB/s 
(10.4MB/s)(10.0MiB/1005msec) 00:11:09.070 slat (usec): min=8, max=5688, avg=172.61, stdev=881.73 00:11:09.070 clat (usec): min=16714, max=24495, avg=22668.05, stdev=1037.37 00:11:09.070 lat (usec): min=21539, max=24507, avg=22840.66, stdev=538.68 00:11:09.070 clat percentiles (usec): 00:11:09.070 | 1.00th=[17433], 5.00th=[21627], 10.00th=[22152], 20.00th=[22414], 00:11:09.070 | 30.00th=[22676], 40.00th=[22676], 50.00th=[22676], 60.00th=[22938], 00:11:09.070 | 70.00th=[22938], 80.00th=[23200], 90.00th=[23462], 95.00th=[23725], 00:11:09.070 | 99.00th=[24249], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:11:09.070 | 99.99th=[24511] 00:11:09.070 write: IOPS=2994, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1005msec); 0 zone resets 00:11:09.070 slat (usec): min=10, max=5711, avg=180.00, stdev=871.25 00:11:09.070 clat (usec): min=99, max=24861, avg=22726.13, stdev=2493.26 00:11:09.070 lat (usec): min=4982, max=24898, avg=22906.13, stdev=2337.82 00:11:09.070 clat percentiles (usec): 00:11:09.070 | 1.00th=[ 5669], 5.00th=[18482], 10.00th=[22414], 20.00th=[22676], 00:11:09.070 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:11:09.070 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:11:09.070 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773], 00:11:09.070 | 99.99th=[24773] 00:11:09.070 bw ( KiB/s): min=10760, max=12288, per=16.93%, avg=11524.00, stdev=1080.46, samples=2 00:11:09.070 iops : min= 2690, max= 3072, avg=2881.00, stdev=270.11, samples=2 00:11:09.070 lat (usec) : 100=0.02% 00:11:09.070 lat (msec) : 10=0.57%, 20=4.22%, 50=95.19% 00:11:09.070 cpu : usr=2.49%, sys=8.96%, ctx=175, majf=0, minf=9 00:11:09.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:09.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.070 issued rwts: total=2560,3009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.070 00:11:09.070 Run status group 0 (all jobs): 00:11:09.070 READ: bw=60.8MiB/s (63.8MB/s), 9.95MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=61.1MiB (64.1MB), run=1002-1005msec 00:11:09.070 WRITE: bw=66.5MiB/s (69.7MB/s), 11.7MiB/s-23.2MiB/s (12.3MB/s-24.3MB/s), io=66.8MiB (70.0MB), run=1002-1005msec 00:11:09.070 00:11:09.070 Disk stats (read/write): 00:11:09.070 nvme0n1: ios=2258/2560, merge=0/0, ticks=10589/11710, in_queue=22299, util=88.06% 00:11:09.070 nvme0n2: ios=4909/5120, merge=0/0, ticks=16225/15778, in_queue=32003, util=88.88% 00:11:09.070 nvme0n3: ios=4096/4576, merge=0/0, ticks=11562/12327, in_queue=23889, util=89.18% 00:11:09.070 nvme0n4: ios=2208/2560, merge=0/0, ticks=11836/13835, in_queue=25671, util=89.74% 00:11:09.070 18:20:06 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:09.070 [global] 00:11:09.070 thread=1 00:11:09.070 invalidate=1 00:11:09.070 rw=randwrite 00:11:09.070 time_based=1 00:11:09.070 runtime=1 00:11:09.070 ioengine=libaio 00:11:09.070 direct=1 00:11:09.070 bs=4096 00:11:09.070 iodepth=128 00:11:09.070 norandommap=0 00:11:09.070 numjobs=1 00:11:09.070 00:11:09.070 verify_dump=1 00:11:09.070 verify_backlog=512 00:11:09.070 verify_state_save=0 00:11:09.070 do_verify=1 00:11:09.070 verify=crc32c-intel 00:11:09.070 [job0] 00:11:09.070 filename=/dev/nvme0n1 00:11:09.070 [job1] 00:11:09.070 filename=/dev/nvme0n2 00:11:09.070 [job2] 
00:11:09.070 filename=/dev/nvme0n3 00:11:09.070 [job3] 00:11:09.070 filename=/dev/nvme0n4 00:11:09.070 Could not set queue depth (nvme0n1) 00:11:09.070 Could not set queue depth (nvme0n2) 00:11:09.071 Could not set queue depth (nvme0n3) 00:11:09.071 Could not set queue depth (nvme0n4) 00:11:09.071 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.071 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.071 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.071 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.071 fio-3.35 00:11:09.071 Starting 4 threads 00:11:10.447 00:11:10.447 job0: (groupid=0, jobs=1): err= 0: pid=75413: Sun Nov 17 18:20:08 2024 00:11:10.447 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:11:10.447 slat (usec): min=8, max=3294, avg=85.25, stdev=354.15 00:11:10.447 clat (usec): min=3520, max=14939, avg=11129.39, stdev=1074.87 00:11:10.447 lat (usec): min=5737, max=14952, avg=11214.65, stdev=1092.25 00:11:10.447 clat percentiles (usec): 00:11:10.447 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10290], 00:11:10.447 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:11:10.447 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12518], 95.00th=[12780], 00:11:10.447 | 99.00th=[13435], 99.50th=[13698], 99.90th=[13960], 99.95th=[14746], 00:11:10.447 | 99.99th=[14877] 00:11:10.447 write: IOPS=5662, BW=22.1MiB/s (23.2MB/s)(22.1MiB/1001msec); 0 zone resets 00:11:10.447 slat (usec): min=10, max=3503, avg=84.22, stdev=384.09 00:11:10.447 clat (usec): min=132, max=15337, avg=11248.04, stdev=1050.10 00:11:10.447 lat (usec): min=2431, max=15405, avg=11332.27, stdev=1109.75 00:11:10.447 clat percentiles (usec): 00:11:10.447 | 1.00th=[ 8455], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:11:10.447 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:11:10.447 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12256], 95.00th=[12780], 00:11:10.447 | 99.00th=[14091], 99.50th=[14353], 99.90th=[14746], 99.95th=[14877], 00:11:10.447 | 99.99th=[15401] 00:11:10.447 bw ( KiB/s): min=23871, max=23871, per=35.14%, avg=23871.00, stdev= 0.00, samples=1 00:11:10.447 iops : min= 5967, max= 5967, avg=5967.00, stdev= 0.00, samples=1 00:11:10.447 lat (usec) : 250=0.01% 00:11:10.447 lat (msec) : 4=0.32%, 10=8.07%, 20=91.60% 00:11:10.447 cpu : usr=5.40%, sys=14.60%, ctx=466, majf=0, minf=1 00:11:10.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:10.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.447 issued rwts: total=5632,5668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.447 job1: (groupid=0, jobs=1): err= 0: pid=75414: Sun Nov 17 18:20:08 2024 00:11:10.447 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:11:10.447 slat (usec): min=7, max=4659, avg=85.35, stdev=427.38 00:11:10.447 clat (usec): min=6929, max=15987, avg=11104.47, stdev=1118.28 00:11:10.447 lat (usec): min=6949, max=16937, avg=11189.82, stdev=1159.92 00:11:10.447 clat percentiles (usec): 00:11:10.447 | 1.00th=[ 7898], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 
00:11:10.447 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:11:10.447 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12256], 95.00th=[12911], 00:11:10.447 | 99.00th=[14746], 99.50th=[15270], 99.90th=[15926], 99.95th=[15926], 00:11:10.447 | 99.99th=[15926] 00:11:10.447 write: IOPS=5804, BW=22.7MiB/s (23.8MB/s)(22.8MiB/1004msec); 0 zone resets 00:11:10.447 slat (usec): min=11, max=4800, avg=82.27, stdev=429.39 00:11:10.447 clat (usec): min=251, max=16632, avg=11055.59, stdev=1311.57 00:11:10.447 lat (usec): min=4444, max=16655, avg=11137.87, stdev=1370.40 00:11:10.447 clat percentiles (usec): 00:11:10.447 | 1.00th=[ 5669], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10421], 00:11:10.447 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:11:10.447 | 70.00th=[11338], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:11:10.447 | 99.00th=[15401], 99.50th=[15795], 99.90th=[16450], 99.95th=[16581], 00:11:10.447 | 99.99th=[16581] 00:11:10.447 bw ( KiB/s): min=21429, max=24176, per=33.57%, avg=22802.50, stdev=1942.42, samples=2 00:11:10.447 iops : min= 5357, max= 6044, avg=5700.50, stdev=485.78, samples=2 00:11:10.447 lat (usec) : 500=0.01% 00:11:10.447 lat (msec) : 10=9.97%, 20=90.02% 00:11:10.447 cpu : usr=4.19%, sys=16.25%, ctx=438, majf=0, minf=3 00:11:10.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:10.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.447 issued rwts: total=5632,5828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.447 job2: (groupid=0, jobs=1): err= 0: pid=75415: Sun Nov 17 18:20:08 2024 00:11:10.447 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:11:10.447 slat (usec): min=3, max=6721, avg=209.55, stdev=901.72 00:11:10.447 clat (usec): min=16718, max=37461, avg=26062.48, stdev=4517.87 00:11:10.447 lat (usec): min=16738, max=37477, avg=26272.04, stdev=4594.20 00:11:10.447 clat percentiles (usec): 00:11:10.447 | 1.00th=[16909], 5.00th=[20841], 10.00th=[21103], 20.00th=[21627], 00:11:10.447 | 30.00th=[21890], 40.00th=[24511], 50.00th=[25560], 60.00th=[27657], 00:11:10.447 | 70.00th=[29754], 80.00th=[30802], 90.00th=[31589], 95.00th=[32637], 00:11:10.447 | 99.00th=[35914], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487], 00:11:10.447 | 99.99th=[37487] 00:11:10.447 write: IOPS=2500, BW=9.77MiB/s (10.2MB/s)(9.82MiB/1006msec); 0 zone resets 00:11:10.447 slat (usec): min=10, max=7255, avg=220.80, stdev=873.48 00:11:10.447 clat (usec): min=180, max=63787, avg=29270.23, stdev=13286.80 00:11:10.447 lat (usec): min=7435, max=63825, avg=29491.03, stdev=13377.99 00:11:10.447 clat percentiles (usec): 00:11:10.447 | 1.00th=[ 8029], 5.00th=[15139], 10.00th=[15533], 20.00th=[18482], 00:11:10.447 | 30.00th=[20841], 40.00th=[21365], 50.00th=[22152], 60.00th=[29492], 00:11:10.447 | 70.00th=[36963], 80.00th=[39584], 90.00th=[50070], 95.00th=[56886], 00:11:10.447 | 99.00th=[63177], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:11:10.447 | 99.99th=[63701] 00:11:10.447 bw ( KiB/s): min= 8582, max=10496, per=14.04%, avg=9539.00, stdev=1353.40, samples=2 00:11:10.447 iops : min= 2145, max= 2624, avg=2384.50, stdev=338.70, samples=2 00:11:10.447 lat (usec) : 250=0.02% 00:11:10.447 lat (msec) : 10=0.92%, 20=14.62%, 50=78.65%, 100=5.79% 00:11:10.447 cpu : usr=2.39%, sys=7.56%, ctx=261, majf=0, minf=3 00:11:10.447 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:10.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.447 issued rwts: total=2048,2515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.447 job3: (groupid=0, jobs=1): err= 0: pid=75416: Sun Nov 17 18:20:08 2024 00:11:10.447 read: IOPS=2614, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1004msec) 00:11:10.447 slat (usec): min=6, max=11038, avg=185.03, stdev=990.48 00:11:10.447 clat (usec): min=158, max=45606, avg=23516.48, stdev=6929.92 00:11:10.447 lat (usec): min=6950, max=45633, avg=23701.51, stdev=6903.98 00:11:10.447 clat percentiles (usec): 00:11:10.447 | 1.00th=[ 7373], 5.00th=[16909], 10.00th=[18220], 20.00th=[18744], 00:11:10.447 | 30.00th=[19006], 40.00th=[19530], 50.00th=[20055], 60.00th=[22938], 00:11:10.447 | 70.00th=[26870], 80.00th=[28443], 90.00th=[29492], 95.00th=[40109], 00:11:10.447 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:11:10.447 | 99.99th=[45351] 00:11:10.447 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:11:10.447 slat (usec): min=13, max=11042, avg=160.52, stdev=791.36 00:11:10.447 clat (usec): min=11940, max=32827, avg=20841.54, stdev=4691.95 00:11:10.447 lat (usec): min=14970, max=32843, avg=21002.07, stdev=4660.78 00:11:10.447 clat percentiles (usec): 00:11:10.447 | 1.00th=[13566], 5.00th=[15401], 10.00th=[15664], 20.00th=[16188], 00:11:10.447 | 30.00th=[16712], 40.00th=[19792], 50.00th=[20317], 60.00th=[20841], 00:11:10.447 | 70.00th=[21627], 80.00th=[25560], 90.00th=[28181], 95.00th=[28967], 00:11:10.447 | 99.00th=[32637], 99.50th=[32637], 99.90th=[32900], 99.95th=[32900], 00:11:10.447 | 99.99th=[32900] 00:11:10.447 bw ( KiB/s): min=11760, max=12312, per=17.72%, avg=12036.00, stdev=390.32, samples=2 00:11:10.447 iops : min= 2940, max= 3078, avg=3009.00, stdev=97.58, samples=2 00:11:10.447 lat (usec) : 250=0.02% 00:11:10.447 lat (msec) : 10=0.56%, 20=44.67%, 50=54.75% 00:11:10.447 cpu : usr=2.29%, sys=9.57%, ctx=179, majf=0, minf=8 00:11:10.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:10.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.448 issued rwts: total=2625,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.448 00:11:10.448 Run status group 0 (all jobs): 00:11:10.448 READ: bw=61.9MiB/s (64.9MB/s), 8143KiB/s-22.0MiB/s (8339kB/s-23.0MB/s), io=62.3MiB (65.3MB), run=1001-1006msec 00:11:10.448 WRITE: bw=66.3MiB/s (69.6MB/s), 9.77MiB/s-22.7MiB/s (10.2MB/s-23.8MB/s), io=66.7MiB (70.0MB), run=1001-1006msec 00:11:10.448 00:11:10.448 Disk stats (read/write): 00:11:10.448 nvme0n1: ios=4658/5028, merge=0/0, ticks=16135/15787, in_queue=31922, util=87.46% 00:11:10.448 nvme0n2: ios=4647/5090, merge=0/0, ticks=23988/24172, in_queue=48160, util=87.93% 00:11:10.448 nvme0n3: ios=1661/2048, merge=0/0, ticks=14491/19723, in_queue=34214, util=89.26% 00:11:10.448 nvme0n4: ios=2400/2560, merge=0/0, ticks=13312/11370, in_queue=24682, util=89.62% 00:11:10.448 18:20:08 -- target/fio.sh@55 -- # sync 00:11:10.448 18:20:08 -- target/fio.sh@59 -- # fio_pid=75435 00:11:10.448 18:20:08 -- target/fio.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:10.448 18:20:08 -- target/fio.sh@61 -- # sleep 3 00:11:10.448 [global] 00:11:10.448 thread=1 00:11:10.448 invalidate=1 00:11:10.448 rw=read 00:11:10.448 time_based=1 00:11:10.448 runtime=10 00:11:10.448 ioengine=libaio 00:11:10.448 direct=1 00:11:10.448 bs=4096 00:11:10.448 iodepth=1 00:11:10.448 norandommap=1 00:11:10.448 numjobs=1 00:11:10.448 00:11:10.448 [job0] 00:11:10.448 filename=/dev/nvme0n1 00:11:10.448 [job1] 00:11:10.448 filename=/dev/nvme0n2 00:11:10.448 [job2] 00:11:10.448 filename=/dev/nvme0n3 00:11:10.448 [job3] 00:11:10.448 filename=/dev/nvme0n4 00:11:10.448 Could not set queue depth (nvme0n1) 00:11:10.448 Could not set queue depth (nvme0n2) 00:11:10.448 Could not set queue depth (nvme0n3) 00:11:10.448 Could not set queue depth (nvme0n4) 00:11:10.448 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.448 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.448 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.448 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.448 fio-3.35 00:11:10.448 Starting 4 threads 00:11:13.743 18:20:11 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:13.743 fio: pid=75478, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:13.743 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=50282496, buflen=4096 00:11:13.743 18:20:11 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:13.743 fio: pid=75477, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:13.743 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=48537600, buflen=4096 00:11:13.743 18:20:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.743 18:20:11 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:14.001 fio: pid=75475, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.001 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=58499072, buflen=4096 00:11:14.001 18:20:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.001 18:20:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:14.260 fio: pid=75476, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.260 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=58945536, buflen=4096 00:11:14.260 00:11:14.260 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75475: Sun Nov 17 18:20:12 2024 00:11:14.260 read: IOPS=4106, BW=16.0MiB/s (16.8MB/s)(55.8MiB/3478msec) 00:11:14.260 slat (usec): min=7, max=13715, avg=16.51, stdev=170.18 00:11:14.260 clat (usec): min=121, max=1739, avg=225.46, stdev=48.47 00:11:14.260 lat (usec): min=134, max=14014, avg=241.97, stdev=177.79 00:11:14.260 clat percentiles (usec): 00:11:14.260 | 1.00th=[ 135], 5.00th=[ 149], 10.00th=[ 159], 20.00th=[ 192], 00:11:14.260 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 
235], 00:11:14.260 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 310], 00:11:14.260 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 400], 99.95th=[ 416], 00:11:14.260 | 99.99th=[ 1172] 00:11:14.261 bw ( KiB/s): min=13876, max=19576, per=28.84%, avg=16292.67, stdev=1838.40, samples=6 00:11:14.261 iops : min= 3469, max= 4894, avg=4073.17, stdev=459.60, samples=6 00:11:14.261 lat (usec) : 250=75.31%, 500=24.66%, 750=0.01% 00:11:14.261 lat (msec) : 2=0.02% 00:11:14.261 cpu : usr=1.24%, sys=5.52%, ctx=14290, majf=0, minf=1 00:11:14.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.261 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.261 issued rwts: total=14283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.261 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75476: Sun Nov 17 18:20:12 2024 00:11:14.261 read: IOPS=3850, BW=15.0MiB/s (15.8MB/s)(56.2MiB/3738msec) 00:11:14.261 slat (usec): min=10, max=11623, avg=18.22, stdev=167.23 00:11:14.261 clat (usec): min=116, max=14014, avg=239.94, stdev=138.79 00:11:14.261 lat (usec): min=128, max=14031, avg=258.16, stdev=218.20 00:11:14.261 clat percentiles (usec): 00:11:14.261 | 1.00th=[ 128], 5.00th=[ 141], 10.00th=[ 155], 20.00th=[ 212], 00:11:14.261 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 251], 00:11:14.261 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 318], 00:11:14.261 | 99.00th=[ 379], 99.50th=[ 416], 99.90th=[ 930], 99.95th=[ 1483], 00:11:14.261 | 99.99th=[ 4080] 00:11:14.261 bw ( KiB/s): min=13232, max=16877, per=26.68%, avg=15075.00, stdev=1058.34, samples=7 00:11:14.261 iops : min= 3308, max= 4219, avg=3768.71, stdev=264.52, samples=7 00:11:14.261 lat (usec) : 250=59.99%, 500=39.71%, 750=0.16%, 1000=0.04% 00:11:14.261 lat (msec) : 2=0.04%, 4=0.03%, 10=0.01%, 20=0.01% 00:11:14.261 cpu : usr=1.34%, sys=5.11%, ctx=14403, majf=0, minf=2 00:11:14.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.261 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.261 issued rwts: total=14392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.261 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75477: Sun Nov 17 18:20:12 2024 00:11:14.261 read: IOPS=3681, BW=14.4MiB/s (15.1MB/s)(46.3MiB/3219msec) 00:11:14.261 slat (usec): min=7, max=8600, avg=15.79, stdev=103.42 00:11:14.261 clat (usec): min=146, max=2534, avg=254.50, stdev=44.28 00:11:14.261 lat (usec): min=162, max=8960, avg=270.30, stdev=113.94 00:11:14.261 clat percentiles (usec): 00:11:14.261 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:11:14.261 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:11:14.261 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 318], 00:11:14.261 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 392], 99.95th=[ 537], 00:11:14.261 | 99.99th=[ 2409] 00:11:14.261 bw ( KiB/s): min=13662, max=15312, per=26.35%, avg=14890.33, stdev=614.87, samples=6 00:11:14.261 iops : min= 3415, max= 3828, avg=3722.50, stdev=153.92, samples=6 00:11:14.261 lat (usec) : 
250=51.62%, 500=48.33%, 750=0.03% 00:11:14.261 lat (msec) : 2=0.01%, 4=0.02% 00:11:14.261 cpu : usr=1.37%, sys=4.29%, ctx=11855, majf=0, minf=1 00:11:14.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.261 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.261 issued rwts: total=11851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.261 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75478: Sun Nov 17 18:20:12 2024 00:11:14.261 read: IOPS=4175, BW=16.3MiB/s (17.1MB/s)(48.0MiB/2940msec) 00:11:14.261 slat (usec): min=7, max=120, avg=11.22, stdev= 4.66 00:11:14.261 clat (usec): min=131, max=7770, avg=227.13, stdev=84.08 00:11:14.261 lat (usec): min=143, max=7785, avg=238.35, stdev=83.83 00:11:14.261 clat percentiles (usec): 00:11:14.261 | 1.00th=[ 145], 5.00th=[ 159], 10.00th=[ 174], 20.00th=[ 204], 00:11:14.261 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237], 00:11:14.261 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 277], 00:11:14.261 | 99.00th=[ 297], 99.50th=[ 318], 99.90th=[ 461], 99.95th=[ 1012], 00:11:14.261 | 99.99th=[ 2802] 00:11:14.261 bw ( KiB/s): min=15816, max=17800, per=29.06%, avg=16420.80, stdev=785.50, samples=5 00:11:14.261 iops : min= 3954, max= 4450, avg=4105.20, stdev=196.38, samples=5 00:11:14.261 lat (usec) : 250=76.76%, 500=23.14%, 750=0.03% 00:11:14.261 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01% 00:11:14.261 cpu : usr=1.09%, sys=4.25%, ctx=12277, majf=0, minf=1 00:11:14.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.261 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.261 issued rwts: total=12277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.261 00:11:14.261 Run status group 0 (all jobs): 00:11:14.261 READ: bw=55.2MiB/s (57.9MB/s), 14.4MiB/s-16.3MiB/s (15.1MB/s-17.1MB/s), io=206MiB (216MB), run=2940-3738msec 00:11:14.261 00:11:14.261 Disk stats (read/write): 00:11:14.261 nvme0n1: ios=13652/0, merge=0/0, ticks=3126/0, in_queue=3126, util=95.36% 00:11:14.261 nvme0n2: ios=13669/0, merge=0/0, ticks=3342/0, in_queue=3342, util=95.66% 00:11:14.261 nvme0n3: ios=11533/0, merge=0/0, ticks=2934/0, in_queue=2934, util=96.43% 00:11:14.261 nvme0n4: ios=11875/0, merge=0/0, ticks=2567/0, in_queue=2567, util=96.63% 00:11:14.261 18:20:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.261 18:20:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:14.520 18:20:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.520 18:20:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:14.779 18:20:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.779 18:20:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:15.037 18:20:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.037 18:20:13 -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:15.296 18:20:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.296 18:20:13 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:15.555 18:20:13 -- target/fio.sh@69 -- # fio_status=0 00:11:15.555 18:20:13 -- target/fio.sh@70 -- # wait 75435 00:11:15.555 18:20:13 -- target/fio.sh@70 -- # fio_status=4 00:11:15.555 18:20:13 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.555 18:20:13 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.555 18:20:13 -- common/autotest_common.sh@1208 -- # local i=0 00:11:15.555 18:20:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.555 18:20:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:15.555 18:20:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:15.555 18:20:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.555 nvmf hotplug test: fio failed as expected 00:11:15.555 18:20:13 -- common/autotest_common.sh@1220 -- # return 0 00:11:15.555 18:20:13 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:15.555 18:20:13 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:15.555 18:20:13 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.814 18:20:13 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:15.814 18:20:13 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:15.814 18:20:13 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:15.814 18:20:13 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:15.814 18:20:13 -- target/fio.sh@91 -- # nvmftestfini 00:11:15.814 18:20:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:15.814 18:20:13 -- nvmf/common.sh@116 -- # sync 00:11:15.814 18:20:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:15.814 18:20:13 -- nvmf/common.sh@119 -- # set +e 00:11:15.814 18:20:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:15.814 18:20:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:15.814 rmmod nvme_tcp 00:11:15.814 rmmod nvme_fabrics 00:11:15.814 rmmod nvme_keyring 00:11:15.814 18:20:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:15.814 18:20:14 -- nvmf/common.sh@123 -- # set -e 00:11:15.814 18:20:14 -- nvmf/common.sh@124 -- # return 0 00:11:15.814 18:20:14 -- nvmf/common.sh@477 -- # '[' -n 75048 ']' 00:11:15.814 18:20:14 -- nvmf/common.sh@478 -- # killprocess 75048 00:11:15.814 18:20:14 -- common/autotest_common.sh@936 -- # '[' -z 75048 ']' 00:11:15.814 18:20:14 -- common/autotest_common.sh@940 -- # kill -0 75048 00:11:15.814 18:20:14 -- common/autotest_common.sh@941 -- # uname 00:11:15.814 18:20:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:15.814 18:20:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75048 00:11:16.074 killing process with pid 75048 00:11:16.074 18:20:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:16.074 18:20:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:16.074 18:20:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75048' 00:11:16.074 18:20:14 -- common/autotest_common.sh@955 -- # kill 
75048 00:11:16.074 18:20:14 -- common/autotest_common.sh@960 -- # wait 75048 00:11:16.074 18:20:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:16.074 18:20:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:16.074 18:20:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:16.074 18:20:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:16.074 18:20:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:16.074 18:20:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.074 18:20:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.074 18:20:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.074 18:20:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:16.074 00:11:16.074 real 0m19.251s 00:11:16.074 user 1m12.471s 00:11:16.074 sys 0m10.426s 00:11:16.074 18:20:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:16.074 18:20:14 -- common/autotest_common.sh@10 -- # set +x 00:11:16.074 ************************************ 00:11:16.074 END TEST nvmf_fio_target 00:11:16.074 ************************************ 00:11:16.074 18:20:14 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:16.074 18:20:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:16.074 18:20:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.074 18:20:14 -- common/autotest_common.sh@10 -- # set +x 00:11:16.074 ************************************ 00:11:16.074 START TEST nvmf_bdevio 00:11:16.074 ************************************ 00:11:16.074 18:20:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:16.333 * Looking for test storage... 00:11:16.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:16.333 18:20:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:16.333 18:20:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:16.333 18:20:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:16.333 18:20:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:16.333 18:20:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:16.333 18:20:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:16.333 18:20:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:16.333 18:20:14 -- scripts/common.sh@335 -- # IFS=.-: 00:11:16.333 18:20:14 -- scripts/common.sh@335 -- # read -ra ver1 00:11:16.333 18:20:14 -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.333 18:20:14 -- scripts/common.sh@336 -- # read -ra ver2 00:11:16.333 18:20:14 -- scripts/common.sh@337 -- # local 'op=<' 00:11:16.333 18:20:14 -- scripts/common.sh@339 -- # ver1_l=2 00:11:16.333 18:20:14 -- scripts/common.sh@340 -- # ver2_l=1 00:11:16.333 18:20:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:16.333 18:20:14 -- scripts/common.sh@343 -- # case "$op" in 00:11:16.333 18:20:14 -- scripts/common.sh@344 -- # : 1 00:11:16.333 18:20:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:16.333 18:20:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.333 18:20:14 -- scripts/common.sh@364 -- # decimal 1 00:11:16.333 18:20:14 -- scripts/common.sh@352 -- # local d=1 00:11:16.333 18:20:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.333 18:20:14 -- scripts/common.sh@354 -- # echo 1 00:11:16.333 18:20:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:16.333 18:20:14 -- scripts/common.sh@365 -- # decimal 2 00:11:16.333 18:20:14 -- scripts/common.sh@352 -- # local d=2 00:11:16.333 18:20:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.333 18:20:14 -- scripts/common.sh@354 -- # echo 2 00:11:16.333 18:20:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:16.333 18:20:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:16.333 18:20:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:16.333 18:20:14 -- scripts/common.sh@367 -- # return 0 00:11:16.333 18:20:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.333 18:20:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:16.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.333 --rc genhtml_branch_coverage=1 00:11:16.333 --rc genhtml_function_coverage=1 00:11:16.333 --rc genhtml_legend=1 00:11:16.333 --rc geninfo_all_blocks=1 00:11:16.333 --rc geninfo_unexecuted_blocks=1 00:11:16.333 00:11:16.333 ' 00:11:16.333 18:20:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:16.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.333 --rc genhtml_branch_coverage=1 00:11:16.333 --rc genhtml_function_coverage=1 00:11:16.333 --rc genhtml_legend=1 00:11:16.333 --rc geninfo_all_blocks=1 00:11:16.333 --rc geninfo_unexecuted_blocks=1 00:11:16.333 00:11:16.333 ' 00:11:16.333 18:20:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:16.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.334 --rc genhtml_branch_coverage=1 00:11:16.334 --rc genhtml_function_coverage=1 00:11:16.334 --rc genhtml_legend=1 00:11:16.334 --rc geninfo_all_blocks=1 00:11:16.334 --rc geninfo_unexecuted_blocks=1 00:11:16.334 00:11:16.334 ' 00:11:16.334 18:20:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:16.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.334 --rc genhtml_branch_coverage=1 00:11:16.334 --rc genhtml_function_coverage=1 00:11:16.334 --rc genhtml_legend=1 00:11:16.334 --rc geninfo_all_blocks=1 00:11:16.334 --rc geninfo_unexecuted_blocks=1 00:11:16.334 00:11:16.334 ' 00:11:16.334 18:20:14 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:16.334 18:20:14 -- nvmf/common.sh@7 -- # uname -s 00:11:16.334 18:20:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.334 18:20:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.334 18:20:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.334 18:20:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.334 18:20:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.334 18:20:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.334 18:20:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.334 18:20:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.334 18:20:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.334 18:20:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.334 18:20:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:11:16.334 
18:20:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:11:16.334 18:20:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.334 18:20:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.334 18:20:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:16.334 18:20:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.334 18:20:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.334 18:20:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.334 18:20:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.334 18:20:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.334 18:20:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.334 18:20:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.334 18:20:14 -- paths/export.sh@5 -- # export PATH 00:11:16.334 18:20:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.334 18:20:14 -- nvmf/common.sh@46 -- # : 0 00:11:16.334 18:20:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:16.334 18:20:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:16.334 18:20:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:16.334 18:20:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.334 18:20:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.334 18:20:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
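Earlier in this stretch of nvmf/common.sh the harness exports the initiator identity it will reuse for the rest of the run: the host NQN comes straight from nvme-cli, the host ID is the UUID portion of that NQN, and both are bundled into the NVME_HOST argument array handed to nvme connect. A small sketch consistent with the values in the trace (the ##*: derivation is an illustrative assumption, not necessarily the exact line in nvmf/common.sh):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: host ID = UUID part of the host NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")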
00:11:16.334 18:20:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:16.334 18:20:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:16.334 18:20:14 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:16.334 18:20:14 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:16.334 18:20:14 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:16.334 18:20:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:16.334 18:20:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.334 18:20:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:16.334 18:20:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:16.334 18:20:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:16.334 18:20:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.334 18:20:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.334 18:20:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.334 18:20:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:16.334 18:20:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:16.334 18:20:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:16.334 18:20:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:16.334 18:20:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:16.334 18:20:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:16.334 18:20:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.334 18:20:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.334 18:20:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:16.334 18:20:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:16.334 18:20:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:16.334 18:20:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:16.334 18:20:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:16.334 18:20:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.334 18:20:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:16.334 18:20:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:16.334 18:20:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:16.334 18:20:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:16.334 18:20:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:16.334 18:20:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:16.334 Cannot find device "nvmf_tgt_br" 00:11:16.334 18:20:14 -- nvmf/common.sh@154 -- # true 00:11:16.334 18:20:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.334 Cannot find device "nvmf_tgt_br2" 00:11:16.334 18:20:14 -- nvmf/common.sh@155 -- # true 00:11:16.334 18:20:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:16.334 18:20:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:16.334 Cannot find device "nvmf_tgt_br" 00:11:16.334 18:20:14 -- nvmf/common.sh@157 -- # true 00:11:16.334 18:20:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:16.334 Cannot find device "nvmf_tgt_br2" 00:11:16.334 18:20:14 -- nvmf/common.sh@158 -- # true 00:11:16.334 18:20:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:16.593 18:20:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:16.593 18:20:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.593 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:16.593 18:20:14 -- nvmf/common.sh@161 -- # true 00:11:16.593 18:20:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.593 18:20:14 -- nvmf/common.sh@162 -- # true 00:11:16.593 18:20:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:16.593 18:20:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:16.593 18:20:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:16.593 18:20:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:16.593 18:20:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:16.593 18:20:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:16.593 18:20:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:16.593 18:20:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:16.593 18:20:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:16.593 18:20:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:16.593 18:20:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:16.593 18:20:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:16.593 18:20:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:16.593 18:20:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:16.593 18:20:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:16.593 18:20:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:16.593 18:20:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:16.593 18:20:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:16.593 18:20:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:16.593 18:20:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:16.593 18:20:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:16.593 18:20:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:16.593 18:20:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:16.593 18:20:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:16.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:11:16.593 00:11:16.593 --- 10.0.0.2 ping statistics --- 00:11:16.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.593 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:16.593 18:20:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:16.593 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:16.593 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:16.593 00:11:16.593 --- 10.0.0.3 ping statistics --- 00:11:16.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.593 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:16.593 18:20:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:16.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:16.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:16.593 00:11:16.593 --- 10.0.0.1 ping statistics --- 00:11:16.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.593 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:16.593 18:20:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.593 18:20:14 -- nvmf/common.sh@421 -- # return 0 00:11:16.593 18:20:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:16.593 18:20:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.593 18:20:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:16.852 18:20:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:16.852 18:20:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.852 18:20:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:16.852 18:20:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:16.852 18:20:14 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:16.852 18:20:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:16.852 18:20:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:16.852 18:20:14 -- common/autotest_common.sh@10 -- # set +x 00:11:16.852 18:20:14 -- nvmf/common.sh@469 -- # nvmfpid=75748 00:11:16.852 18:20:14 -- nvmf/common.sh@470 -- # waitforlisten 75748 00:11:16.852 18:20:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:16.852 18:20:14 -- common/autotest_common.sh@829 -- # '[' -z 75748 ']' 00:11:16.852 18:20:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.852 18:20:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.852 18:20:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.852 18:20:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.852 18:20:14 -- common/autotest_common.sh@10 -- # set +x 00:11:16.852 [2024-11-17 18:20:14.923322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:16.852 [2024-11-17 18:20:14.923446] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.852 [2024-11-17 18:20:15.055033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.852 [2024-11-17 18:20:15.088787] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:16.852 [2024-11-17 18:20:15.088951] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.852 [2024-11-17 18:20:15.088964] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.852 [2024-11-17 18:20:15.088972] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
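The trace above is the nvmf_veth_init stage: the harness builds an isolated veth/bridge topology, moves the target-side interface into the nvmf_tgt_ns_spdk namespace, verifies reachability with ping, and only then launches nvmf_tgt inside that namespace. A minimal standalone sketch of the same topology, reduced to a single target interface, is shown below; every command, name, and address is copied from this run's trace rather than offered as a general recipe.

  # Hedged sketch of the veth/namespace setup performed by nvmf_veth_init above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability, as checked above
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78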
00:11:16.852 [2024-11-17 18:20:15.089615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:16.853 [2024-11-17 18:20:15.089764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:16.853 [2024-11-17 18:20:15.089918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:16.853 [2024-11-17 18:20:15.089918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.139 18:20:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.139 18:20:15 -- common/autotest_common.sh@862 -- # return 0 00:11:17.139 18:20:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:17.139 18:20:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:17.139 18:20:15 -- common/autotest_common.sh@10 -- # set +x 00:11:17.139 18:20:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.139 18:20:15 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.139 18:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.139 18:20:15 -- common/autotest_common.sh@10 -- # set +x 00:11:17.139 [2024-11-17 18:20:15.222034] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.139 18:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.139 18:20:15 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:17.139 18:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.139 18:20:15 -- common/autotest_common.sh@10 -- # set +x 00:11:17.139 Malloc0 00:11:17.139 18:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.139 18:20:15 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:17.139 18:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.139 18:20:15 -- common/autotest_common.sh@10 -- # set +x 00:11:17.139 18:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.139 18:20:15 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.139 18:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.139 18:20:15 -- common/autotest_common.sh@10 -- # set +x 00:11:17.139 18:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.139 18:20:15 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.139 18:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.139 18:20:15 -- common/autotest_common.sh@10 -- # set +x 00:11:17.139 [2024-11-17 18:20:15.289643] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.139 18:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.139 18:20:15 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:17.139 18:20:15 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:17.139 18:20:15 -- nvmf/common.sh@520 -- # config=() 00:11:17.139 18:20:15 -- nvmf/common.sh@520 -- # local subsystem config 00:11:17.139 18:20:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:17.139 18:20:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:17.139 { 00:11:17.139 "params": { 00:11:17.139 "name": "Nvme$subsystem", 00:11:17.139 "trtype": "$TEST_TRANSPORT", 00:11:17.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:17.139 "adrfam": "ipv4", 00:11:17.139 "trsvcid": "$NVMF_PORT", 00:11:17.139 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:17.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:17.139 "hdgst": ${hdgst:-false}, 00:11:17.139 "ddgst": ${ddgst:-false} 00:11:17.139 }, 00:11:17.139 "method": "bdev_nvme_attach_controller" 00:11:17.139 } 00:11:17.139 EOF 00:11:17.139 )") 00:11:17.139 18:20:15 -- nvmf/common.sh@542 -- # cat 00:11:17.139 18:20:15 -- nvmf/common.sh@544 -- # jq . 00:11:17.139 18:20:15 -- nvmf/common.sh@545 -- # IFS=, 00:11:17.139 18:20:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:17.139 "params": { 00:11:17.139 "name": "Nvme1", 00:11:17.139 "trtype": "tcp", 00:11:17.139 "traddr": "10.0.0.2", 00:11:17.139 "adrfam": "ipv4", 00:11:17.139 "trsvcid": "4420", 00:11:17.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:17.139 "hdgst": false, 00:11:17.139 "ddgst": false 00:11:17.139 }, 00:11:17.139 "method": "bdev_nvme_attach_controller" 00:11:17.139 }' 00:11:17.139 [2024-11-17 18:20:15.343705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:17.139 [2024-11-17 18:20:15.343806] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75771 ] 00:11:17.439 [2024-11-17 18:20:15.483730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:17.439 [2024-11-17 18:20:15.519540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.439 [2024-11-17 18:20:15.519677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.439 [2024-11-17 18:20:15.519687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.439 [2024-11-17 18:20:15.646938] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:11:17.439 [2024-11-17 18:20:15.646999] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:17.439 I/O targets: 00:11:17.439 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:17.439 00:11:17.439 00:11:17.439 CUnit - A unit testing framework for C - Version 2.1-3 00:11:17.439 http://cunit.sourceforge.net/ 00:11:17.439 00:11:17.439 00:11:17.439 Suite: bdevio tests on: Nvme1n1 00:11:17.439 Test: blockdev write read block ...passed 00:11:17.439 Test: blockdev write zeroes read block ...passed 00:11:17.439 Test: blockdev write zeroes read no split ...passed 00:11:17.439 Test: blockdev write zeroes read split ...passed 00:11:17.439 Test: blockdev write zeroes read split partial ...passed 00:11:17.439 Test: blockdev reset ...[2024-11-17 18:20:15.676263] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:17.439 [2024-11-17 18:20:15.676367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9172a0 (9): Bad file descriptor 00:11:17.698 [2024-11-17 18:20:15.696034] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:17.698 passed 00:11:17.698 Test: blockdev write read 8 blocks ...passed 00:11:17.698 Test: blockdev write read size > 128k ...passed 00:11:17.698 Test: blockdev write read invalid size ...passed 00:11:17.698 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:17.698 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:17.698 Test: blockdev write read max offset ...passed 00:11:17.698 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:17.698 Test: blockdev writev readv 8 blocks ...passed 00:11:17.698 Test: blockdev writev readv 30 x 1block ...passed 00:11:17.698 Test: blockdev writev readv block ...passed 00:11:17.698 Test: blockdev writev readv size > 128k ...passed 00:11:17.698 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:17.698 Test: blockdev comparev and writev ...[2024-11-17 18:20:15.703660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.698 [2024-11-17 18:20:15.703751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:17.698 [2024-11-17 18:20:15.703774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.698 [2024-11-17 18:20:15.703786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:17.698 [2024-11-17 18:20:15.704347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.698 [2024-11-17 18:20:15.704387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:17.698 [2024-11-17 18:20:15.704405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.698 [2024-11-17 18:20:15.704416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:17.698 [2024-11-17 18:20:15.704851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.698 [2024-11-17 18:20:15.704879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:17.698 [2024-11-17 18:20:15.704897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.698 [2024-11-17 18:20:15.704908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:17.698 [2024-11-17 18:20:15.705437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.698 [2024-11-17 18:20:15.705466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:17.698 [2024-11-17 18:20:15.705485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.698 [2024-11-17 18:20:15.705495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:17.698 passed 00:11:17.698 Test: blockdev nvme passthru rw ...passed 00:11:17.698 Test: blockdev nvme passthru vendor specific ...[2024-11-17 18:20:15.706375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.698 [2024-11-17 18:20:15.706399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:17.698 [2024-11-17 18:20:15.706610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.698 [2024-11-17 18:20:15.706638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:17.698 [2024-11-17 18:20:15.706830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.698 [2024-11-17 18:20:15.706856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:17.698 [2024-11-17 18:20:15.707052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.698 [2024-11-17 18:20:15.707079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:17.698 passed 00:11:17.698 Test: blockdev nvme admin passthru ...passed 00:11:17.698 Test: blockdev copy ...passed 00:11:17.698 00:11:17.698 Run Summary: Type Total Ran Passed Failed Inactive 00:11:17.698 suites 1 1 n/a 0 0 00:11:17.698 tests 23 23 23 0 0 00:11:17.698 asserts 152 152 152 0 n/a 00:11:17.698 00:11:17.699 Elapsed time = 0.163 seconds 00:11:17.699 18:20:15 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.699 18:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.699 18:20:15 -- common/autotest_common.sh@10 -- # set +x 00:11:17.699 18:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.699 18:20:15 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:17.699 18:20:15 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:17.699 18:20:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:17.699 18:20:15 -- nvmf/common.sh@116 -- # sync 00:11:17.699 18:20:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:17.699 18:20:15 -- nvmf/common.sh@119 -- # set +e 00:11:17.699 18:20:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:17.699 18:20:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:17.699 rmmod nvme_tcp 00:11:17.699 rmmod nvme_fabrics 00:11:17.699 rmmod nvme_keyring 00:11:17.699 18:20:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:17.957 18:20:15 -- nvmf/common.sh@123 -- # set -e 00:11:17.957 18:20:15 -- nvmf/common.sh@124 -- # return 0 00:11:17.957 18:20:15 -- nvmf/common.sh@477 -- # '[' -n 75748 ']' 00:11:17.957 18:20:15 -- nvmf/common.sh@478 -- # killprocess 75748 00:11:17.957 18:20:15 -- common/autotest_common.sh@936 -- # '[' -z 75748 ']' 00:11:17.957 18:20:15 -- common/autotest_common.sh@940 -- # kill -0 75748 00:11:17.957 18:20:15 -- common/autotest_common.sh@941 -- # uname 00:11:17.957 18:20:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:17.957 18:20:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75748 00:11:17.957 18:20:16 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:11:17.957 18:20:16 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:17.957 killing process with pid 75748 00:11:17.957 18:20:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75748' 00:11:17.957 18:20:16 -- common/autotest_common.sh@955 -- # kill 75748 00:11:17.957 18:20:16 -- common/autotest_common.sh@960 -- # wait 75748 00:11:17.957 18:20:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:17.957 18:20:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:17.957 18:20:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:17.957 18:20:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.957 18:20:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:17.957 18:20:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.957 18:20:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.957 18:20:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.957 18:20:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:17.957 00:11:17.957 real 0m1.871s 00:11:17.957 user 0m5.222s 00:11:17.957 sys 0m0.615s 00:11:17.957 18:20:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:17.957 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:11:17.957 ************************************ 00:11:17.957 END TEST nvmf_bdevio 00:11:17.957 ************************************ 00:11:18.216 18:20:16 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:11:18.216 18:20:16 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:18.216 18:20:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:18.216 18:20:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.216 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:11:18.216 ************************************ 00:11:18.216 START TEST nvmf_bdevio_no_huge 00:11:18.216 ************************************ 00:11:18.216 18:20:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:18.216 * Looking for test storage... 
00:11:18.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:18.216 18:20:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:18.216 18:20:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:18.216 18:20:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:18.216 18:20:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:18.216 18:20:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:18.216 18:20:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:18.216 18:20:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:18.216 18:20:16 -- scripts/common.sh@335 -- # IFS=.-: 00:11:18.216 18:20:16 -- scripts/common.sh@335 -- # read -ra ver1 00:11:18.216 18:20:16 -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.216 18:20:16 -- scripts/common.sh@336 -- # read -ra ver2 00:11:18.216 18:20:16 -- scripts/common.sh@337 -- # local 'op=<' 00:11:18.216 18:20:16 -- scripts/common.sh@339 -- # ver1_l=2 00:11:18.216 18:20:16 -- scripts/common.sh@340 -- # ver2_l=1 00:11:18.216 18:20:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:18.216 18:20:16 -- scripts/common.sh@343 -- # case "$op" in 00:11:18.216 18:20:16 -- scripts/common.sh@344 -- # : 1 00:11:18.216 18:20:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:18.216 18:20:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.216 18:20:16 -- scripts/common.sh@364 -- # decimal 1 00:11:18.216 18:20:16 -- scripts/common.sh@352 -- # local d=1 00:11:18.216 18:20:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.216 18:20:16 -- scripts/common.sh@354 -- # echo 1 00:11:18.216 18:20:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:18.216 18:20:16 -- scripts/common.sh@365 -- # decimal 2 00:11:18.216 18:20:16 -- scripts/common.sh@352 -- # local d=2 00:11:18.216 18:20:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.216 18:20:16 -- scripts/common.sh@354 -- # echo 2 00:11:18.217 18:20:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:18.217 18:20:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:18.217 18:20:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:18.217 18:20:16 -- scripts/common.sh@367 -- # return 0 00:11:18.217 18:20:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.217 18:20:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:18.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.217 --rc genhtml_branch_coverage=1 00:11:18.217 --rc genhtml_function_coverage=1 00:11:18.217 --rc genhtml_legend=1 00:11:18.217 --rc geninfo_all_blocks=1 00:11:18.217 --rc geninfo_unexecuted_blocks=1 00:11:18.217 00:11:18.217 ' 00:11:18.217 18:20:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:18.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.217 --rc genhtml_branch_coverage=1 00:11:18.217 --rc genhtml_function_coverage=1 00:11:18.217 --rc genhtml_legend=1 00:11:18.217 --rc geninfo_all_blocks=1 00:11:18.217 --rc geninfo_unexecuted_blocks=1 00:11:18.217 00:11:18.217 ' 00:11:18.217 18:20:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:18.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.217 --rc genhtml_branch_coverage=1 00:11:18.217 --rc genhtml_function_coverage=1 00:11:18.217 --rc genhtml_legend=1 00:11:18.217 --rc geninfo_all_blocks=1 00:11:18.217 --rc geninfo_unexecuted_blocks=1 00:11:18.217 00:11:18.217 ' 00:11:18.217 
18:20:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:18.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.217 --rc genhtml_branch_coverage=1 00:11:18.217 --rc genhtml_function_coverage=1 00:11:18.217 --rc genhtml_legend=1 00:11:18.217 --rc geninfo_all_blocks=1 00:11:18.217 --rc geninfo_unexecuted_blocks=1 00:11:18.217 00:11:18.217 ' 00:11:18.217 18:20:16 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:18.217 18:20:16 -- nvmf/common.sh@7 -- # uname -s 00:11:18.217 18:20:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.217 18:20:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.217 18:20:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.217 18:20:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.217 18:20:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.217 18:20:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.217 18:20:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.217 18:20:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.217 18:20:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.217 18:20:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.217 18:20:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:11:18.217 18:20:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:11:18.217 18:20:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.217 18:20:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.217 18:20:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:18.217 18:20:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:18.217 18:20:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.217 18:20:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.217 18:20:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.217 18:20:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.217 18:20:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.217 18:20:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.217 18:20:16 -- paths/export.sh@5 -- # export PATH 00:11:18.217 18:20:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.217 18:20:16 -- nvmf/common.sh@46 -- # : 0 00:11:18.217 18:20:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:18.217 18:20:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:18.217 18:20:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:18.217 18:20:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.217 18:20:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.217 18:20:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:18.217 18:20:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:18.217 18:20:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:18.217 18:20:16 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.217 18:20:16 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.217 18:20:16 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:18.217 18:20:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:18.217 18:20:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.217 18:20:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:18.217 18:20:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:18.217 18:20:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:18.217 18:20:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.217 18:20:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:18.217 18:20:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.217 18:20:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:18.217 18:20:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:18.217 18:20:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:18.217 18:20:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:18.217 18:20:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:18.217 18:20:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:18.217 18:20:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.217 18:20:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.217 18:20:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:18.217 18:20:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:18.217 18:20:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:18.217 18:20:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:18.217 18:20:16 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:18.217 18:20:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.217 18:20:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:18.217 18:20:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:18.217 18:20:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:18.217 18:20:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:18.217 18:20:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:18.217 18:20:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:18.217 Cannot find device "nvmf_tgt_br" 00:11:18.217 18:20:16 -- nvmf/common.sh@154 -- # true 00:11:18.217 18:20:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:18.477 Cannot find device "nvmf_tgt_br2" 00:11:18.477 18:20:16 -- nvmf/common.sh@155 -- # true 00:11:18.477 18:20:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:18.477 18:20:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:18.477 Cannot find device "nvmf_tgt_br" 00:11:18.477 18:20:16 -- nvmf/common.sh@157 -- # true 00:11:18.477 18:20:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:18.477 Cannot find device "nvmf_tgt_br2" 00:11:18.477 18:20:16 -- nvmf/common.sh@158 -- # true 00:11:18.477 18:20:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:18.477 18:20:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:18.477 18:20:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:18.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.477 18:20:16 -- nvmf/common.sh@161 -- # true 00:11:18.477 18:20:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:18.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.477 18:20:16 -- nvmf/common.sh@162 -- # true 00:11:18.477 18:20:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:18.477 18:20:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:18.477 18:20:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:18.477 18:20:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:18.477 18:20:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:18.477 18:20:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:18.477 18:20:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:18.477 18:20:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:18.477 18:20:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:18.477 18:20:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:18.477 18:20:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:18.477 18:20:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:18.477 18:20:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:18.477 18:20:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:18.477 18:20:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:18.477 18:20:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:18.477 18:20:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:18.477 18:20:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:18.735 18:20:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:18.735 18:20:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:18.735 18:20:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:18.735 18:20:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:18.735 18:20:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:18.735 18:20:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:18.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:11:18.735 00:11:18.735 --- 10.0.0.2 ping statistics --- 00:11:18.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.735 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:18.735 18:20:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:18.735 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:18.735 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:11:18.735 00:11:18.735 --- 10.0.0.3 ping statistics --- 00:11:18.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.735 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:11:18.735 18:20:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:18.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:11:18.735 00:11:18.735 --- 10.0.0.1 ping statistics --- 00:11:18.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.735 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:18.735 18:20:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.735 18:20:16 -- nvmf/common.sh@421 -- # return 0 00:11:18.735 18:20:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:18.735 18:20:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.735 18:20:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:18.735 18:20:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:18.735 18:20:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.735 18:20:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:18.735 18:20:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:18.735 18:20:16 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:18.736 18:20:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:18.736 18:20:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:18.736 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:11:18.736 18:20:16 -- nvmf/common.sh@469 -- # nvmfpid=75958 00:11:18.736 18:20:16 -- nvmf/common.sh@470 -- # waitforlisten 75958 00:11:18.736 18:20:16 -- common/autotest_common.sh@829 -- # '[' -z 75958 ']' 00:11:18.736 18:20:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.736 18:20:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.736 18:20:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:18.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
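From here the harness repeats the bdevio flow without reserved hugepages: the nvmf_tgt command just above and the bdevio command further below carry the extra --no-huge -s 1024 flags, and the target is provisioned over RPC exactly as in the first pass. A condensed, hedged sketch of that sequence, using only commands and arguments that appear in this run (rpc_cmd in the scripts is a thin wrapper around this rpc.py path):

  # Target launched inside the test namespace without hugepages; with --no-huge,
  # -s 1024 asks DPDK for roughly 1 GiB of ordinary 4 KiB pages (the EAL line
  # below shows "-m 1024 --no-huge --iova-mode=va") instead of hugepages.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  # Provisioning over RPC, as in the hugepage pass:
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevio then attaches as the initiator through the generated JSON config,
  # also without hugepages:
  # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024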
00:11:18.736 18:20:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.736 18:20:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.736 18:20:16 -- common/autotest_common.sh@10 -- # set +x 00:11:18.736 [2024-11-17 18:20:16.883217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:18.736 [2024-11-17 18:20:16.883364] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:18.994 [2024-11-17 18:20:17.028025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.994 [2024-11-17 18:20:17.097815] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:18.994 [2024-11-17 18:20:17.097971] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.994 [2024-11-17 18:20:17.097983] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.994 [2024-11-17 18:20:17.097991] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.994 [2024-11-17 18:20:17.098142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:18.994 [2024-11-17 18:20:17.098295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:18.994 [2024-11-17 18:20:17.098965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:18.994 [2024-11-17 18:20:17.098971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.929 18:20:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.929 18:20:17 -- common/autotest_common.sh@862 -- # return 0 00:11:19.929 18:20:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:19.929 18:20:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:19.929 18:20:17 -- common/autotest_common.sh@10 -- # set +x 00:11:19.929 18:20:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.929 18:20:17 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.929 18:20:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.929 18:20:17 -- common/autotest_common.sh@10 -- # set +x 00:11:19.929 [2024-11-17 18:20:17.876026] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.929 18:20:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.929 18:20:17 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:19.929 18:20:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.929 18:20:17 -- common/autotest_common.sh@10 -- # set +x 00:11:19.929 Malloc0 00:11:19.929 18:20:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.929 18:20:17 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:19.929 18:20:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.929 18:20:17 -- common/autotest_common.sh@10 -- # set +x 00:11:19.929 18:20:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.929 18:20:17 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:19.929 18:20:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.929 
18:20:17 -- common/autotest_common.sh@10 -- # set +x 00:11:19.929 18:20:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.929 18:20:17 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.929 18:20:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.929 18:20:17 -- common/autotest_common.sh@10 -- # set +x 00:11:19.929 [2024-11-17 18:20:17.917965] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.929 18:20:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.929 18:20:17 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:19.929 18:20:17 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:19.929 18:20:17 -- nvmf/common.sh@520 -- # config=() 00:11:19.929 18:20:17 -- nvmf/common.sh@520 -- # local subsystem config 00:11:19.929 18:20:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:19.929 18:20:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:19.929 { 00:11:19.929 "params": { 00:11:19.929 "name": "Nvme$subsystem", 00:11:19.929 "trtype": "$TEST_TRANSPORT", 00:11:19.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:19.929 "adrfam": "ipv4", 00:11:19.929 "trsvcid": "$NVMF_PORT", 00:11:19.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:19.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:19.929 "hdgst": ${hdgst:-false}, 00:11:19.929 "ddgst": ${ddgst:-false} 00:11:19.929 }, 00:11:19.929 "method": "bdev_nvme_attach_controller" 00:11:19.929 } 00:11:19.929 EOF 00:11:19.929 )") 00:11:19.929 18:20:17 -- nvmf/common.sh@542 -- # cat 00:11:19.929 18:20:17 -- nvmf/common.sh@544 -- # jq . 00:11:19.929 18:20:17 -- nvmf/common.sh@545 -- # IFS=, 00:11:19.929 18:20:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:19.929 "params": { 00:11:19.929 "name": "Nvme1", 00:11:19.929 "trtype": "tcp", 00:11:19.929 "traddr": "10.0.0.2", 00:11:19.929 "adrfam": "ipv4", 00:11:19.929 "trsvcid": "4420", 00:11:19.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:19.929 "hdgst": false, 00:11:19.929 "ddgst": false 00:11:19.929 }, 00:11:19.929 "method": "bdev_nvme_attach_controller" 00:11:19.929 }' 00:11:19.929 [2024-11-17 18:20:17.965735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:19.929 [2024-11-17 18:20:17.965809] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75994 ] 00:11:19.929 [2024-11-17 18:20:18.099885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:20.188 [2024-11-17 18:20:18.209340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.188 [2024-11-17 18:20:18.209449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.188 [2024-11-17 18:20:18.209457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.188 [2024-11-17 18:20:18.377469] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:11:20.188 [2024-11-17 18:20:18.377720] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:20.188 I/O targets: 00:11:20.188 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:20.188 00:11:20.188 00:11:20.188 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.188 http://cunit.sourceforge.net/ 00:11:20.188 00:11:20.188 00:11:20.188 Suite: bdevio tests on: Nvme1n1 00:11:20.188 Test: blockdev write read block ...passed 00:11:20.188 Test: blockdev write zeroes read block ...passed 00:11:20.188 Test: blockdev write zeroes read no split ...passed 00:11:20.188 Test: blockdev write zeroes read split ...passed 00:11:20.188 Test: blockdev write zeroes read split partial ...passed 00:11:20.188 Test: blockdev reset ...[2024-11-17 18:20:18.418213] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:20.188 [2024-11-17 18:20:18.418349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e66760 (9): Bad file descriptor 00:11:20.188 passed 00:11:20.188 Test: blockdev write read 8 blocks ...[2024-11-17 18:20:18.437204] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:20.188 passed 00:11:20.188 Test: blockdev write read size > 128k ...passed 00:11:20.188 Test: blockdev write read invalid size ...passed 00:11:20.188 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.188 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.189 Test: blockdev write read max offset ...passed 00:11:20.189 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.189 Test: blockdev writev readv 8 blocks ...passed 00:11:20.189 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.189 Test: blockdev writev readv block ...passed 00:11:20.189 Test: blockdev writev readv size > 128k ...passed 00:11:20.189 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.189 Test: blockdev comparev and writev ...[2024-11-17 18:20:18.445786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:20.189 [2024-11-17 18:20:18.445983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:20.189 [2024-11-17 18:20:18.446028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:20.189 [2024-11-17 18:20:18.446039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:20.189 [2024-11-17 18:20:18.446377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:20.189 [2024-11-17 18:20:18.446397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:20.189 [2024-11-17 18:20:18.446414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:20.189 [2024-11-17 18:20:18.446424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:20.189 [2024-11-17 18:20:18.446730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:20.189 [2024-11-17 18:20:18.446746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:20.189 [2024-11-17 18:20:18.446762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:20.189 [2024-11-17 18:20:18.446771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:20.189 [2024-11-17 18:20:18.447020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:20.189 [2024-11-17 18:20:18.447036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:20.189 [2024-11-17 18:20:18.447052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:20.189 [2024-11-17 18:20:18.447061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:20.189 passed 00:11:20.189 Test: blockdev nvme passthru rw ...passed 00:11:20.189 Test: blockdev nvme passthru vendor specific ...[2024-11-17 18:20:18.447997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:20.189 [2024-11-17 18:20:18.448023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:20.189 [2024-11-17 18:20:18.448150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:20.189 [2024-11-17 18:20:18.448167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:20.189 [2024-11-17 18:20:18.448272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:20.189 [2024-11-17 18:20:18.448309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:20.189 [2024-11-17 18:20:18.448429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:20.189 [2024-11-17 18:20:18.448450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:20.189 passed 00:11:20.448 Test: blockdev nvme admin passthru ...passed 00:11:20.448 Test: blockdev copy ...passed 00:11:20.448 00:11:20.448 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.448 suites 1 1 n/a 0 0 00:11:20.448 tests 23 23 23 0 0 00:11:20.448 asserts 152 152 152 0 n/a 00:11:20.448 00:11:20.448 Elapsed time = 0.183 seconds 00:11:20.706 18:20:18 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.706 18:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.706 18:20:18 -- common/autotest_common.sh@10 -- # set +x 00:11:20.706 18:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.706 18:20:18 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:20.706 18:20:18 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:20.706 18:20:18 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:20.706 18:20:18 -- nvmf/common.sh@116 -- # sync 00:11:20.706 18:20:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:20.706 18:20:18 -- nvmf/common.sh@119 -- # set +e 00:11:20.706 18:20:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:20.706 18:20:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:20.706 rmmod nvme_tcp 00:11:20.706 rmmod nvme_fabrics 00:11:20.706 rmmod nvme_keyring 00:11:20.706 18:20:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:20.706 18:20:18 -- nvmf/common.sh@123 -- # set -e 00:11:20.706 18:20:18 -- nvmf/common.sh@124 -- # return 0 00:11:20.706 18:20:18 -- nvmf/common.sh@477 -- # '[' -n 75958 ']' 00:11:20.706 18:20:18 -- nvmf/common.sh@478 -- # killprocess 75958 00:11:20.706 18:20:18 -- common/autotest_common.sh@936 -- # '[' -z 75958 ']' 00:11:20.706 18:20:18 -- common/autotest_common.sh@940 -- # kill -0 75958 00:11:20.706 18:20:18 -- common/autotest_common.sh@941 -- # uname 00:11:20.706 18:20:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:20.706 18:20:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75958 00:11:20.706 killing process with pid 75958 00:11:20.706 18:20:18 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:11:20.706 18:20:18 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:20.706 18:20:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75958' 00:11:20.706 18:20:18 -- common/autotest_common.sh@955 -- # kill 75958 00:11:20.706 18:20:18 -- common/autotest_common.sh@960 -- # wait 75958 00:11:20.965 18:20:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:20.965 18:20:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:20.965 18:20:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:20.965 18:20:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.965 18:20:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:20.965 18:20:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.965 18:20:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.965 18:20:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.223 18:20:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:21.223 00:11:21.223 real 0m3.005s 00:11:21.223 user 0m9.546s 00:11:21.223 sys 0m1.170s 00:11:21.223 18:20:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:21.223 ************************************ 00:11:21.223 END TEST nvmf_bdevio_no_huge 00:11:21.223 ************************************ 00:11:21.223 18:20:19 -- common/autotest_common.sh@10 -- # set +x 00:11:21.223 18:20:19 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:21.223 18:20:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:21.223 18:20:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.223 18:20:19 -- common/autotest_common.sh@10 -- # set +x 00:11:21.223 ************************************ 00:11:21.223 START TEST nvmf_tls 00:11:21.223 ************************************ 00:11:21.223 18:20:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:21.223 * Looking for test storage... 
00:11:21.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:21.223 18:20:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:21.223 18:20:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:21.223 18:20:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:21.223 18:20:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:21.223 18:20:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:21.223 18:20:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:21.223 18:20:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:21.224 18:20:19 -- scripts/common.sh@335 -- # IFS=.-: 00:11:21.224 18:20:19 -- scripts/common.sh@335 -- # read -ra ver1 00:11:21.224 18:20:19 -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.224 18:20:19 -- scripts/common.sh@336 -- # read -ra ver2 00:11:21.224 18:20:19 -- scripts/common.sh@337 -- # local 'op=<' 00:11:21.224 18:20:19 -- scripts/common.sh@339 -- # ver1_l=2 00:11:21.224 18:20:19 -- scripts/common.sh@340 -- # ver2_l=1 00:11:21.224 18:20:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:21.224 18:20:19 -- scripts/common.sh@343 -- # case "$op" in 00:11:21.224 18:20:19 -- scripts/common.sh@344 -- # : 1 00:11:21.224 18:20:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:21.224 18:20:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:21.224 18:20:19 -- scripts/common.sh@364 -- # decimal 1 00:11:21.224 18:20:19 -- scripts/common.sh@352 -- # local d=1 00:11:21.224 18:20:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.224 18:20:19 -- scripts/common.sh@354 -- # echo 1 00:11:21.224 18:20:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:21.224 18:20:19 -- scripts/common.sh@365 -- # decimal 2 00:11:21.224 18:20:19 -- scripts/common.sh@352 -- # local d=2 00:11:21.224 18:20:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.224 18:20:19 -- scripts/common.sh@354 -- # echo 2 00:11:21.224 18:20:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:21.224 18:20:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:21.224 18:20:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:21.224 18:20:19 -- scripts/common.sh@367 -- # return 0 00:11:21.224 18:20:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.224 18:20:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.224 --rc genhtml_branch_coverage=1 00:11:21.224 --rc genhtml_function_coverage=1 00:11:21.224 --rc genhtml_legend=1 00:11:21.224 --rc geninfo_all_blocks=1 00:11:21.224 --rc geninfo_unexecuted_blocks=1 00:11:21.224 00:11:21.224 ' 00:11:21.224 18:20:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.224 --rc genhtml_branch_coverage=1 00:11:21.224 --rc genhtml_function_coverage=1 00:11:21.224 --rc genhtml_legend=1 00:11:21.224 --rc geninfo_all_blocks=1 00:11:21.224 --rc geninfo_unexecuted_blocks=1 00:11:21.224 00:11:21.224 ' 00:11:21.224 18:20:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.224 --rc genhtml_branch_coverage=1 00:11:21.224 --rc genhtml_function_coverage=1 00:11:21.224 --rc genhtml_legend=1 00:11:21.224 --rc geninfo_all_blocks=1 00:11:21.224 --rc geninfo_unexecuted_blocks=1 00:11:21.224 00:11:21.224 ' 00:11:21.224 
18:20:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.224 --rc genhtml_branch_coverage=1 00:11:21.224 --rc genhtml_function_coverage=1 00:11:21.224 --rc genhtml_legend=1 00:11:21.224 --rc geninfo_all_blocks=1 00:11:21.224 --rc geninfo_unexecuted_blocks=1 00:11:21.224 00:11:21.224 ' 00:11:21.224 18:20:19 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:21.224 18:20:19 -- nvmf/common.sh@7 -- # uname -s 00:11:21.224 18:20:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.224 18:20:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.224 18:20:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.224 18:20:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.224 18:20:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.224 18:20:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.224 18:20:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.224 18:20:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.224 18:20:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.224 18:20:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.224 18:20:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:11:21.224 18:20:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:11:21.224 18:20:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.224 18:20:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.224 18:20:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:21.224 18:20:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:21.224 18:20:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.224 18:20:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.224 18:20:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.224 18:20:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.224 18:20:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.224 18:20:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.224 18:20:19 -- paths/export.sh@5 -- # export PATH 00:11:21.224 18:20:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.224 18:20:19 -- nvmf/common.sh@46 -- # : 0 00:11:21.224 18:20:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:21.224 18:20:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:21.224 18:20:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:21.224 18:20:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.224 18:20:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.224 18:20:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:21.224 18:20:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:21.224 18:20:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:21.224 18:20:19 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:21.224 18:20:19 -- target/tls.sh@71 -- # nvmftestinit 00:11:21.483 18:20:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:21.483 18:20:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.483 18:20:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:21.483 18:20:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:21.483 18:20:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:21.483 18:20:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.483 18:20:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.483 18:20:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.483 18:20:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:21.483 18:20:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:21.483 18:20:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:21.483 18:20:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:21.483 18:20:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:21.483 18:20:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:21.483 18:20:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.483 18:20:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.483 18:20:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:21.483 18:20:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:21.483 18:20:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:21.483 18:20:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:21.483 18:20:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:21.483 
18:20:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.483 18:20:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:21.483 18:20:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:21.483 18:20:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:21.483 18:20:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:21.483 18:20:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:21.483 18:20:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:21.483 Cannot find device "nvmf_tgt_br" 00:11:21.483 18:20:19 -- nvmf/common.sh@154 -- # true 00:11:21.483 18:20:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.483 Cannot find device "nvmf_tgt_br2" 00:11:21.483 18:20:19 -- nvmf/common.sh@155 -- # true 00:11:21.483 18:20:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:21.483 18:20:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:21.483 Cannot find device "nvmf_tgt_br" 00:11:21.483 18:20:19 -- nvmf/common.sh@157 -- # true 00:11:21.483 18:20:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:21.483 Cannot find device "nvmf_tgt_br2" 00:11:21.483 18:20:19 -- nvmf/common.sh@158 -- # true 00:11:21.483 18:20:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:21.483 18:20:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:21.483 18:20:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.483 18:20:19 -- nvmf/common.sh@161 -- # true 00:11:21.483 18:20:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.483 18:20:19 -- nvmf/common.sh@162 -- # true 00:11:21.483 18:20:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:21.483 18:20:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:21.483 18:20:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:21.483 18:20:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:21.483 18:20:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:21.483 18:20:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:21.483 18:20:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:21.484 18:20:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:21.484 18:20:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:21.484 18:20:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:21.484 18:20:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:21.484 18:20:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:21.484 18:20:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:21.484 18:20:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:21.484 18:20:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:21.484 18:20:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:21.484 18:20:19 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:21.743 18:20:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:21.743 18:20:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:21.743 18:20:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:21.743 18:20:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:21.743 18:20:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:21.743 18:20:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:21.743 18:20:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:21.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:11:21.743 00:11:21.743 --- 10.0.0.2 ping statistics --- 00:11:21.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.743 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:21.743 18:20:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:21.743 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:21.743 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:11:21.743 00:11:21.743 --- 10.0.0.3 ping statistics --- 00:11:21.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.743 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:21.743 18:20:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:21.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:21.743 00:11:21.743 --- 10.0.0.1 ping statistics --- 00:11:21.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.743 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:21.743 18:20:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.743 18:20:19 -- nvmf/common.sh@421 -- # return 0 00:11:21.743 18:20:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:21.743 18:20:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.743 18:20:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:21.743 18:20:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:21.743 18:20:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.743 18:20:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:21.743 18:20:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:21.744 18:20:19 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:21.744 18:20:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:21.744 18:20:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:21.744 18:20:19 -- common/autotest_common.sh@10 -- # set +x 00:11:21.744 18:20:19 -- nvmf/common.sh@469 -- # nvmfpid=76178 00:11:21.744 18:20:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:21.744 18:20:19 -- nvmf/common.sh@470 -- # waitforlisten 76178 00:11:21.744 18:20:19 -- common/autotest_common.sh@829 -- # '[' -z 76178 ']' 00:11:21.744 18:20:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.744 18:20:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
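The nvmf_veth_init run traced above rebuilds the same two-address test network used by every suite in this job. A condensed sketch of the resulting topology, pieced together from the ip and iptables commands in the trace (the helper also brings each link up and ping-checks 10.0.0.2, 10.0.0.3 and 10.0.0.1 exactly as shown):

# Target side lives in its own network namespace; veth pairs bridge it to the host.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator (host) interface
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# All peer ends are enslaved to one bridge so 10.0.0.1 can reach both target IPs.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and let traffic hairpin across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT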
00:11:21.744 18:20:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.744 18:20:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.744 18:20:19 -- common/autotest_common.sh@10 -- # set +x 00:11:21.744 [2024-11-17 18:20:19.888142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:21.744 [2024-11-17 18:20:19.888387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.002 [2024-11-17 18:20:20.023682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.002 [2024-11-17 18:20:20.057483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:22.002 [2024-11-17 18:20:20.057637] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.002 [2024-11-17 18:20:20.057650] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.002 [2024-11-17 18:20:20.057658] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.002 [2024-11-17 18:20:20.057682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.002 18:20:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.002 18:20:20 -- common/autotest_common.sh@862 -- # return 0 00:11:22.002 18:20:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:22.002 18:20:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.002 18:20:20 -- common/autotest_common.sh@10 -- # set +x 00:11:22.002 18:20:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.002 18:20:20 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:11:22.002 18:20:20 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:22.260 true 00:11:22.260 18:20:20 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:22.260 18:20:20 -- target/tls.sh@82 -- # jq -r .tls_version 00:11:22.519 18:20:20 -- target/tls.sh@82 -- # version=0 00:11:22.519 18:20:20 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:11:22.519 18:20:20 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:22.777 18:20:20 -- target/tls.sh@90 -- # jq -r .tls_version 00:11:22.777 18:20:20 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:23.036 18:20:21 -- target/tls.sh@90 -- # version=13 00:11:23.036 18:20:21 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:11:23.036 18:20:21 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:23.295 18:20:21 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:23.295 18:20:21 -- target/tls.sh@98 -- # jq -r .tls_version 00:11:23.553 18:20:21 -- target/tls.sh@98 -- # version=7 00:11:23.553 18:20:21 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:11:23.553 18:20:21 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:23.553 18:20:21 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:11:23.811 18:20:22 -- 
target/tls.sh@105 -- # ktls=false 00:11:23.811 18:20:22 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:11:23.811 18:20:22 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:24.069 18:20:22 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:24.069 18:20:22 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:11:24.328 18:20:22 -- target/tls.sh@113 -- # ktls=true 00:11:24.328 18:20:22 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:11:24.328 18:20:22 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:24.589 18:20:22 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:24.589 18:20:22 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:11:24.848 18:20:23 -- target/tls.sh@121 -- # ktls=false 00:11:24.848 18:20:23 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:11:24.848 18:20:23 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:11:24.848 18:20:23 -- target/tls.sh@49 -- # local key hash crc 00:11:24.848 18:20:23 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:11:24.848 18:20:23 -- target/tls.sh@51 -- # hash=01 00:11:24.848 18:20:23 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:11:24.848 18:20:23 -- target/tls.sh@52 -- # tail -c8 00:11:24.848 18:20:23 -- target/tls.sh@52 -- # gzip -1 -c 00:11:24.848 18:20:23 -- target/tls.sh@52 -- # head -c 4 00:11:24.848 18:20:23 -- target/tls.sh@52 -- # crc='p$H�' 00:11:24.848 18:20:23 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:24.848 18:20:23 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:11:24.848 18:20:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:24.848 18:20:23 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:24.848 18:20:23 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:11:24.848 18:20:23 -- target/tls.sh@49 -- # local key hash crc 00:11:24.848 18:20:23 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:11:24.848 18:20:23 -- target/tls.sh@51 -- # hash=01 00:11:24.848 18:20:23 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:11:24.848 18:20:23 -- target/tls.sh@52 -- # gzip -1 -c 00:11:24.848 18:20:23 -- target/tls.sh@52 -- # tail -c8 00:11:24.848 18:20:23 -- target/tls.sh@52 -- # head -c 4 00:11:24.848 18:20:23 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:11:24.848 18:20:23 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:24.848 18:20:23 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:11:24.848 18:20:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:24.848 18:20:23 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:24.848 18:20:23 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:24.848 18:20:23 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:24.848 18:20:23 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:24.848 18:20:23 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
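The two format_interchange_psk calls above turn a raw hex key into the NVMe TLS PSK interchange format: a CRC32 of the key (lifted from gzip's trailer) is appended and the result is base64-encoded under an NVMeTLSkey-1:<hash>: prefix. The same recipe, restated with only the commands visible in the trace:

key=00112233445566778899aabbccddeeff
hash=01                                             # 01 selects the CRC-32 protected variant

# gzip -1 ends its output with CRC32 + input size; the first 4 of the last 8 bytes
# are the little-endian CRC32 of the uncompressed key string.
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)

# Concatenate key and CRC, base64-encode, and wrap in the interchange framing.
psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
echo "$psk"   # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: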
00:11:24.848 18:20:23 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:24.848 18:20:23 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:24.848 18:20:23 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:25.108 18:20:23 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:25.367 18:20:23 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:25.367 18:20:23 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:25.367 18:20:23 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:25.626 [2024-11-17 18:20:23.814177] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.626 18:20:23 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:25.885 18:20:24 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:26.144 [2024-11-17 18:20:24.326400] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:26.144 [2024-11-17 18:20:24.326833] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.144 18:20:24 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:26.402 malloc0 00:11:26.402 18:20:24 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:26.660 18:20:24 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:26.919 18:20:25 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:39.147 Initializing NVMe Controllers 00:11:39.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:39.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:39.147 Initialization complete. Launching workers. 
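Taken together, the setup_nvmf_tgt RPCs above configure a TLS-only listener entirely over rpc.py, and the perf run then attaches to it with the same key file. A condensed sketch of both sides as traced (rpc and key paths abbreviated for readability):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt    # chmod 0600 as above

# Target side: the TLS version is pinned on the ssl sock impl before framework init.
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

# Initiator side (run via ip netns exec nvmf_tgt_ns_spdk in this log): -S ssl selects the
# ssl sock implementation and --psk-path supplies the matching interchange key.
spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key"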
00:11:39.147 ======================================================== 00:11:39.147 Latency(us) 00:11:39.147 Device Information : IOPS MiB/s Average min max 00:11:39.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11433.49 44.66 5598.91 1091.55 10802.40 00:11:39.147 ======================================================== 00:11:39.147 Total : 11433.49 44.66 5598.91 1091.55 10802.40 00:11:39.147 00:11:39.147 18:20:35 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:39.148 18:20:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:39.148 18:20:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:39.148 18:20:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:39.148 18:20:35 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:39.148 18:20:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:39.148 18:20:35 -- target/tls.sh@28 -- # bdevperf_pid=76416 00:11:39.148 18:20:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:39.148 18:20:35 -- target/tls.sh@31 -- # waitforlisten 76416 /var/tmp/bdevperf.sock 00:11:39.148 18:20:35 -- common/autotest_common.sh@829 -- # '[' -z 76416 ']' 00:11:39.148 18:20:35 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:39.148 18:20:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:39.148 18:20:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:39.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:39.148 18:20:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:39.148 18:20:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:39.148 18:20:35 -- common/autotest_common.sh@10 -- # set +x 00:11:39.148 [2024-11-17 18:20:35.273894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:39.148 [2024-11-17 18:20:35.273989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76416 ] 00:11:39.148 [2024-11-17 18:20:35.414089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.148 [2024-11-17 18:20:35.455776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.148 18:20:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.148 18:20:36 -- common/autotest_common.sh@862 -- # return 0 00:11:39.148 18:20:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:39.148 [2024-11-17 18:20:36.496076] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:39.148 TLSTESTn1 00:11:39.148 18:20:36 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:39.148 Running I/O for 10 seconds... 
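The run_bdevperf helper exercised here drives the same TLS connection from the bdev layer in three steps: start bdevperf idle on its own RPC socket, attach the controller over TLS by RPC (the bdev appears as TLSTESTn1), then trigger the queued workload. The steps as traced:

# 1. Start bdevperf idle (-z) on a private RPC socket with the verify workload parameters.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# 2. Attach the TLS-protected controller; its namespace shows up as bdev TLSTESTn1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

# 3. Kick off the pre-configured I/O and wait for the result summary.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests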
00:11:49.132 00:11:49.132 Latency(us) 00:11:49.132 [2024-11-17T18:20:47.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.132 [2024-11-17T18:20:47.399Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:49.132 Verification LBA range: start 0x0 length 0x2000 00:11:49.132 TLSTESTn1 : 10.01 6088.70 23.78 0.00 0.00 20990.92 4706.68 23831.27 00:11:49.132 [2024-11-17T18:20:47.399Z] =================================================================================================================== 00:11:49.132 [2024-11-17T18:20:47.399Z] Total : 6088.70 23.78 0.00 0.00 20990.92 4706.68 23831.27 00:11:49.132 0 00:11:49.132 18:20:46 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:49.132 18:20:46 -- target/tls.sh@45 -- # killprocess 76416 00:11:49.132 18:20:46 -- common/autotest_common.sh@936 -- # '[' -z 76416 ']' 00:11:49.132 18:20:46 -- common/autotest_common.sh@940 -- # kill -0 76416 00:11:49.132 18:20:46 -- common/autotest_common.sh@941 -- # uname 00:11:49.132 18:20:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:49.132 18:20:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76416 00:11:49.132 killing process with pid 76416 00:11:49.132 Received shutdown signal, test time was about 10.000000 seconds 00:11:49.132 00:11:49.132 Latency(us) 00:11:49.132 [2024-11-17T18:20:47.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.133 [2024-11-17T18:20:47.400Z] =================================================================================================================== 00:11:49.133 [2024-11-17T18:20:47.400Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:49.133 18:20:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:49.133 18:20:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:49.133 18:20:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76416' 00:11:49.133 18:20:46 -- common/autotest_common.sh@955 -- # kill 76416 00:11:49.133 18:20:46 -- common/autotest_common.sh@960 -- # wait 76416 00:11:49.133 18:20:46 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:49.133 18:20:46 -- common/autotest_common.sh@650 -- # local es=0 00:11:49.133 18:20:46 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:49.133 18:20:46 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:49.133 18:20:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.133 18:20:46 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:49.133 18:20:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.133 18:20:46 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:49.133 18:20:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:49.133 18:20:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:49.133 18:20:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:49.133 18:20:46 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:11:49.133 18:20:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:49.133 
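From here the suite switches to negative cases: run_bdevperf is wrapped in NOT, so the step passes only if the attach fails, and here it must fail because key2.txt is not the PSK registered for host1 on the target. Roughly what that assertion looks like, with NOT simplified from its autotest_common.sh definition:

# Simplified view of the NOT helper (the real one also special-cases signal exits).
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))        # succeed only when the wrapped command failed
}

# Wrong key: the TLS handshake/attach is expected to be rejected.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt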
18:20:46 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:49.133 18:20:46 -- target/tls.sh@28 -- # bdevperf_pid=76545 00:11:49.133 18:20:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:49.133 18:20:46 -- target/tls.sh@31 -- # waitforlisten 76545 /var/tmp/bdevperf.sock 00:11:49.133 18:20:46 -- common/autotest_common.sh@829 -- # '[' -z 76545 ']' 00:11:49.133 18:20:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:49.133 18:20:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:49.133 18:20:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:49.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:49.133 18:20:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:49.133 18:20:46 -- common/autotest_common.sh@10 -- # set +x 00:11:49.133 [2024-11-17 18:20:46.960564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:49.133 [2024-11-17 18:20:46.960661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76545 ] 00:11:49.133 [2024-11-17 18:20:47.098235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.133 [2024-11-17 18:20:47.131921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.756 18:20:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:49.756 18:20:47 -- common/autotest_common.sh@862 -- # return 0 00:11:49.756 18:20:47 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:50.015 [2024-11-17 18:20:48.145165] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:50.015 [2024-11-17 18:20:48.156258] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:50.015 [2024-11-17 18:20:48.157070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ab80 (107): Transport endpoint is not connected 00:11:50.015 [2024-11-17 18:20:48.158046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ab80 (9): Bad file descriptor 00:11:50.015 [2024-11-17 18:20:48.159042] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:50.015 [2024-11-17 18:20:48.159236] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:50.015 [2024-11-17 18:20:48.159386] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:50.015 request: 00:11:50.015 { 00:11:50.015 "name": "TLSTEST", 00:11:50.015 "trtype": "tcp", 00:11:50.015 "traddr": "10.0.0.2", 00:11:50.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:50.016 "adrfam": "ipv4", 00:11:50.016 "trsvcid": "4420", 00:11:50.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.016 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:11:50.016 "method": "bdev_nvme_attach_controller", 00:11:50.016 "req_id": 1 00:11:50.016 } 00:11:50.016 Got JSON-RPC error response 00:11:50.016 response: 00:11:50.016 { 00:11:50.016 "code": -32602, 00:11:50.016 "message": "Invalid parameters" 00:11:50.016 } 00:11:50.016 18:20:48 -- target/tls.sh@36 -- # killprocess 76545 00:11:50.016 18:20:48 -- common/autotest_common.sh@936 -- # '[' -z 76545 ']' 00:11:50.016 18:20:48 -- common/autotest_common.sh@940 -- # kill -0 76545 00:11:50.016 18:20:48 -- common/autotest_common.sh@941 -- # uname 00:11:50.016 18:20:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:50.016 18:20:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76545 00:11:50.016 killing process with pid 76545 00:11:50.016 Received shutdown signal, test time was about 10.000000 seconds 00:11:50.016 00:11:50.016 Latency(us) 00:11:50.016 [2024-11-17T18:20:48.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.016 [2024-11-17T18:20:48.283Z] =================================================================================================================== 00:11:50.016 [2024-11-17T18:20:48.283Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:50.016 18:20:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:50.016 18:20:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:50.016 18:20:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76545' 00:11:50.016 18:20:48 -- common/autotest_common.sh@955 -- # kill 76545 00:11:50.016 18:20:48 -- common/autotest_common.sh@960 -- # wait 76545 00:11:50.275 18:20:48 -- target/tls.sh@37 -- # return 1 00:11:50.275 18:20:48 -- common/autotest_common.sh@653 -- # es=1 00:11:50.275 18:20:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:50.275 18:20:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:50.275 18:20:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:50.275 18:20:48 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:50.275 18:20:48 -- common/autotest_common.sh@650 -- # local es=0 00:11:50.275 18:20:48 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:50.275 18:20:48 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:50.275 18:20:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.275 18:20:48 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:50.275 18:20:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.275 18:20:48 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:50.275 18:20:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:50.275 18:20:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:50.275 18:20:48 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:11:50.275 18:20:48 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:50.275 18:20:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:50.275 18:20:48 -- target/tls.sh@28 -- # bdevperf_pid=76573 00:11:50.275 18:20:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:50.275 18:20:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:50.275 18:20:48 -- target/tls.sh@31 -- # waitforlisten 76573 /var/tmp/bdevperf.sock 00:11:50.275 18:20:48 -- common/autotest_common.sh@829 -- # '[' -z 76573 ']' 00:11:50.275 18:20:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:50.275 18:20:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.275 18:20:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:50.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:50.275 18:20:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.275 18:20:48 -- common/autotest_common.sh@10 -- # set +x 00:11:50.275 [2024-11-17 18:20:48.397479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:50.275 [2024-11-17 18:20:48.397784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76573 ] 00:11:50.275 [2024-11-17 18:20:48.538724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.534 [2024-11-17 18:20:48.574034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.475 18:20:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.475 18:20:49 -- common/autotest_common.sh@862 -- # return 0 00:11:51.475 18:20:49 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:51.475 [2024-11-17 18:20:49.708205] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:51.475 [2024-11-17 18:20:49.716480] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:51.475 [2024-11-17 18:20:49.716743] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:51.475 [2024-11-17 18:20:49.716951] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spd[2024-11-17 18:20:49.716960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e8b80 (107): Transport endpoint is not connected 00:11:51.475 k_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:51.475 [2024-11-17 18:20:49.717949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e8b80 (9): Bad file descriptor 00:11:51.475 [2024-11-17 18:20:49.718947] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:51.475 [2024-11-17 18:20:49.719122] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:51.475 [2024-11-17 18:20:49.719236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:11:51.475 request: 00:11:51.475 { 00:11:51.475 "name": "TLSTEST", 00:11:51.475 "trtype": "tcp", 00:11:51.475 "traddr": "10.0.0.2", 00:11:51.475 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:11:51.475 "adrfam": "ipv4", 00:11:51.475 "trsvcid": "4420", 00:11:51.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:51.475 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:51.475 "method": "bdev_nvme_attach_controller", 00:11:51.475 "req_id": 1 00:11:51.475 } 00:11:51.475 Got JSON-RPC error response 00:11:51.475 response: 00:11:51.475 { 00:11:51.475 "code": -32602, 00:11:51.475 "message": "Invalid parameters" 00:11:51.475 } 00:11:51.475 18:20:49 -- target/tls.sh@36 -- # killprocess 76573 00:11:51.475 18:20:49 -- common/autotest_common.sh@936 -- # '[' -z 76573 ']' 00:11:51.475 18:20:49 -- common/autotest_common.sh@940 -- # kill -0 76573 00:11:51.734 18:20:49 -- common/autotest_common.sh@941 -- # uname 00:11:51.734 18:20:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:51.734 18:20:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76573 00:11:51.734 killing process with pid 76573 00:11:51.734 Received shutdown signal, test time was about 10.000000 seconds 00:11:51.734 00:11:51.734 Latency(us) 00:11:51.734 [2024-11-17T18:20:50.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:51.734 [2024-11-17T18:20:50.001Z] =================================================================================================================== 00:11:51.734 [2024-11-17T18:20:50.001Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:51.734 18:20:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:51.734 18:20:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:51.734 18:20:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76573' 00:11:51.734 18:20:49 -- common/autotest_common.sh@955 -- # kill 76573 00:11:51.734 18:20:49 -- common/autotest_common.sh@960 -- # wait 76573 00:11:51.734 18:20:49 -- target/tls.sh@37 -- # return 1 00:11:51.734 18:20:49 -- common/autotest_common.sh@653 -- # es=1 00:11:51.734 18:20:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:51.734 18:20:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:51.734 18:20:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:51.734 18:20:49 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:51.734 18:20:49 -- common/autotest_common.sh@650 -- # local es=0 00:11:51.734 18:20:49 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:51.734 18:20:49 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:51.734 18:20:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:51.734 18:20:49 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:51.735 18:20:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:51.735 18:20:49 -- common/autotest_common.sh@653 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:51.735 18:20:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:51.735 18:20:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:11:51.735 18:20:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:51.735 18:20:49 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:51.735 18:20:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:51.735 18:20:49 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:51.735 18:20:49 -- target/tls.sh@28 -- # bdevperf_pid=76601 00:11:51.735 18:20:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:51.735 18:20:49 -- target/tls.sh@31 -- # waitforlisten 76601 /var/tmp/bdevperf.sock 00:11:51.735 18:20:49 -- common/autotest_common.sh@829 -- # '[' -z 76601 ']' 00:11:51.735 18:20:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:51.735 18:20:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.735 18:20:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:51.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:51.735 18:20:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.735 18:20:49 -- common/autotest_common.sh@10 -- # set +x 00:11:51.735 [2024-11-17 18:20:49.952733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:51.735 [2024-11-17 18:20:49.952992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76601 ] 00:11:51.994 [2024-11-17 18:20:50.086552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.994 [2024-11-17 18:20:50.123134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.994 18:20:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.994 18:20:50 -- common/autotest_common.sh@862 -- # return 0 00:11:51.994 18:20:50 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:52.253 [2024-11-17 18:20:50.404305] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:52.253 [2024-11-17 18:20:50.409168] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:52.253 [2024-11-17 18:20:50.409411] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:52.253 [2024-11-17 18:20:50.409489] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:52.253 [2024-11-17 18:20:50.409931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1704b80 
(107): Transport endpoint is not connected 00:11:52.253 [2024-11-17 18:20:50.410933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1704b80 (9): Bad file descriptor 00:11:52.253 [2024-11-17 18:20:50.411916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:11:52.253 [2024-11-17 18:20:50.411938] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:52.253 [2024-11-17 18:20:50.411963] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:11:52.253 request: 00:11:52.253 { 00:11:52.253 "name": "TLSTEST", 00:11:52.253 "trtype": "tcp", 00:11:52.253 "traddr": "10.0.0.2", 00:11:52.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.253 "adrfam": "ipv4", 00:11:52.253 "trsvcid": "4420", 00:11:52.253 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:11:52.253 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:52.253 "method": "bdev_nvme_attach_controller", 00:11:52.253 "req_id": 1 00:11:52.253 } 00:11:52.253 Got JSON-RPC error response 00:11:52.253 response: 00:11:52.253 { 00:11:52.253 "code": -32602, 00:11:52.253 "message": "Invalid parameters" 00:11:52.253 } 00:11:52.253 18:20:50 -- target/tls.sh@36 -- # killprocess 76601 00:11:52.253 18:20:50 -- common/autotest_common.sh@936 -- # '[' -z 76601 ']' 00:11:52.253 18:20:50 -- common/autotest_common.sh@940 -- # kill -0 76601 00:11:52.253 18:20:50 -- common/autotest_common.sh@941 -- # uname 00:11:52.253 18:20:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:52.253 18:20:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76601 00:11:52.253 18:20:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:52.253 18:20:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:52.253 18:20:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76601' 00:11:52.253 killing process with pid 76601 00:11:52.253 18:20:50 -- common/autotest_common.sh@955 -- # kill 76601 00:11:52.253 Received shutdown signal, test time was about 10.000000 seconds 00:11:52.253 00:11:52.253 Latency(us) 00:11:52.253 [2024-11-17T18:20:50.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.253 [2024-11-17T18:20:50.520Z] =================================================================================================================== 00:11:52.253 [2024-11-17T18:20:50.520Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:52.253 18:20:50 -- common/autotest_common.sh@960 -- # wait 76601 00:11:52.513 18:20:50 -- target/tls.sh@37 -- # return 1 00:11:52.513 18:20:50 -- common/autotest_common.sh@653 -- # es=1 00:11:52.513 18:20:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:52.513 18:20:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:52.513 18:20:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:52.513 18:20:50 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:52.513 18:20:50 -- common/autotest_common.sh@650 -- # local es=0 00:11:52.513 18:20:50 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:52.513 18:20:50 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:52.513 18:20:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:52.513 18:20:50 -- common/autotest_common.sh@642 -- # 
type -t run_bdevperf 00:11:52.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:52.513 18:20:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:52.513 18:20:50 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:52.513 18:20:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:52.513 18:20:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:52.513 18:20:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:52.513 18:20:50 -- target/tls.sh@23 -- # psk= 00:11:52.513 18:20:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:52.513 18:20:50 -- target/tls.sh@28 -- # bdevperf_pid=76620 00:11:52.513 18:20:50 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:52.513 18:20:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:52.513 18:20:50 -- target/tls.sh@31 -- # waitforlisten 76620 /var/tmp/bdevperf.sock 00:11:52.513 18:20:50 -- common/autotest_common.sh@829 -- # '[' -z 76620 ']' 00:11:52.513 18:20:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:52.513 18:20:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.513 18:20:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:52.513 18:20:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.513 18:20:50 -- common/autotest_common.sh@10 -- # set +x 00:11:52.513 [2024-11-17 18:20:50.652755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:52.513 [2024-11-17 18:20:50.653006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76620 ] 00:11:52.772 [2024-11-17 18:20:50.788364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.772 [2024-11-17 18:20:50.822407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.772 18:20:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.772 18:20:50 -- common/autotest_common.sh@862 -- # return 0 00:11:52.772 18:20:50 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:11:53.031 [2024-11-17 18:20:51.116566] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:53.031 [2024-11-17 18:20:51.118106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1343450 (9): Bad file descriptor 00:11:53.031 [2024-11-17 18:20:51.119102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:53.031 [2024-11-17 18:20:51.119319] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:53.031 [2024-11-17 18:20:51.119433] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:53.031 request: 00:11:53.031 { 00:11:53.031 "name": "TLSTEST", 00:11:53.031 "trtype": "tcp", 00:11:53.031 "traddr": "10.0.0.2", 00:11:53.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:53.031 "adrfam": "ipv4", 00:11:53.031 "trsvcid": "4420", 00:11:53.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:53.031 "method": "bdev_nvme_attach_controller", 00:11:53.031 "req_id": 1 00:11:53.031 } 00:11:53.031 Got JSON-RPC error response 00:11:53.031 response: 00:11:53.031 { 00:11:53.031 "code": -32602, 00:11:53.031 "message": "Invalid parameters" 00:11:53.031 } 00:11:53.031 18:20:51 -- target/tls.sh@36 -- # killprocess 76620 00:11:53.031 18:20:51 -- common/autotest_common.sh@936 -- # '[' -z 76620 ']' 00:11:53.031 18:20:51 -- common/autotest_common.sh@940 -- # kill -0 76620 00:11:53.031 18:20:51 -- common/autotest_common.sh@941 -- # uname 00:11:53.031 18:20:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:53.031 18:20:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76620 00:11:53.031 18:20:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:53.031 18:20:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:53.031 18:20:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76620' 00:11:53.031 killing process with pid 76620 00:11:53.031 Received shutdown signal, test time was about 10.000000 seconds 00:11:53.031 00:11:53.031 Latency(us) 00:11:53.031 [2024-11-17T18:20:51.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.031 [2024-11-17T18:20:51.298Z] =================================================================================================================== 00:11:53.031 [2024-11-17T18:20:51.298Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:53.031 18:20:51 -- common/autotest_common.sh@955 -- # kill 76620 00:11:53.031 18:20:51 -- common/autotest_common.sh@960 -- # wait 76620 00:11:53.290 18:20:51 -- target/tls.sh@37 -- # return 1 00:11:53.290 18:20:51 -- common/autotest_common.sh@653 -- # es=1 00:11:53.290 18:20:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:53.290 18:20:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:53.290 18:20:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:53.290 18:20:51 -- target/tls.sh@167 -- # killprocess 76178 00:11:53.290 18:20:51 -- common/autotest_common.sh@936 -- # '[' -z 76178 ']' 00:11:53.290 18:20:51 -- common/autotest_common.sh@940 -- # kill -0 76178 00:11:53.290 18:20:51 -- common/autotest_common.sh@941 -- # uname 00:11:53.290 18:20:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:53.290 18:20:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76178 00:11:53.290 killing process with pid 76178 00:11:53.290 18:20:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:53.290 18:20:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:53.290 18:20:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76178' 00:11:53.290 18:20:51 -- common/autotest_common.sh@955 -- # kill 76178 00:11:53.290 18:20:51 -- common/autotest_common.sh@960 -- # wait 76178 00:11:53.290 18:20:51 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:11:53.290 18:20:51 -- target/tls.sh@49 -- # local key hash crc 00:11:53.290 18:20:51 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:11:53.290 18:20:51 -- target/tls.sh@51 -- # hash=02 
00:11:53.290 18:20:51 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:11:53.290 18:20:51 -- target/tls.sh@52 -- # gzip -1 -c 00:11:53.290 18:20:51 -- target/tls.sh@52 -- # tail -c8 00:11:53.290 18:20:51 -- target/tls.sh@52 -- # head -c 4 00:11:53.290 18:20:51 -- target/tls.sh@52 -- # crc='�e�'\''' 00:11:53.291 18:20:51 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:11:53.291 18:20:51 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:53.291 18:20:51 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:53.291 18:20:51 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:53.291 18:20:51 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:53.291 18:20:51 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:53.291 18:20:51 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:53.291 18:20:51 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:11:53.291 18:20:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:53.291 18:20:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:53.291 18:20:51 -- common/autotest_common.sh@10 -- # set +x 00:11:53.291 18:20:51 -- nvmf/common.sh@469 -- # nvmfpid=76651 00:11:53.291 18:20:51 -- nvmf/common.sh@470 -- # waitforlisten 76651 00:11:53.291 18:20:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:53.291 18:20:51 -- common/autotest_common.sh@829 -- # '[' -z 76651 ']' 00:11:53.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.291 18:20:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.291 18:20:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:53.291 18:20:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.291 18:20:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:53.291 18:20:51 -- common/autotest_common.sh@10 -- # set +x 00:11:53.550 [2024-11-17 18:20:51.561056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:53.550 [2024-11-17 18:20:51.561165] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.550 [2024-11-17 18:20:51.693877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.550 [2024-11-17 18:20:51.727702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:53.550 [2024-11-17 18:20:51.727871] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.550 [2024-11-17 18:20:51.727883] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.550 [2024-11-17 18:20:51.727891] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:53.550 [2024-11-17 18:20:51.727918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.487 18:20:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:54.487 18:20:52 -- common/autotest_common.sh@862 -- # return 0 00:11:54.487 18:20:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:54.487 18:20:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:54.487 18:20:52 -- common/autotest_common.sh@10 -- # set +x 00:11:54.487 18:20:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.487 18:20:52 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:54.487 18:20:52 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:54.487 18:20:52 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:54.746 [2024-11-17 18:20:52.800269] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.746 18:20:52 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:55.005 18:20:53 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:55.005 [2024-11-17 18:20:53.256392] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:55.005 [2024-11-17 18:20:53.256644] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.265 18:20:53 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:55.265 malloc0 00:11:55.265 18:20:53 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:55.524 18:20:53 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:55.784 18:20:53 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:55.784 18:20:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:55.784 18:20:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:55.784 18:20:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:55.784 18:20:53 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:55.784 18:20:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:55.784 18:20:53 -- target/tls.sh@28 -- # bdevperf_pid=76707 00:11:55.784 18:20:53 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:55.784 18:20:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:55.784 18:20:53 -- target/tls.sh@31 -- # waitforlisten 76707 /var/tmp/bdevperf.sock 00:11:55.784 18:20:53 -- common/autotest_common.sh@829 -- # '[' -z 76707 ']' 00:11:55.784 18:20:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:55.784 18:20:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.784 18:20:53 -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:55.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:55.784 18:20:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.784 18:20:53 -- common/autotest_common.sh@10 -- # set +x 00:11:56.043 [2024-11-17 18:20:54.057412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:56.043 [2024-11-17 18:20:54.057556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76707 ] 00:11:56.043 [2024-11-17 18:20:54.205998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.043 [2024-11-17 18:20:54.247182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.982 18:20:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.982 18:20:55 -- common/autotest_common.sh@862 -- # return 0 00:11:56.982 18:20:55 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:57.241 [2024-11-17 18:20:55.261168] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:57.241 TLSTESTn1 00:11:57.241 18:20:55 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:57.241 Running I/O for 10 seconds... 00:12:07.258 00:12:07.258 Latency(us) 00:12:07.258 [2024-11-17T18:21:05.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.259 [2024-11-17T18:21:05.526Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:07.259 Verification LBA range: start 0x0 length 0x2000 00:12:07.259 TLSTESTn1 : 10.01 5836.41 22.80 0.00 0.00 21895.59 4110.89 20852.36 00:12:07.259 [2024-11-17T18:21:05.526Z] =================================================================================================================== 00:12:07.259 [2024-11-17T18:21:05.526Z] Total : 5836.41 22.80 0.00 0.00 21895.59 4110.89 20852.36 00:12:07.259 0 00:12:07.259 18:21:05 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:07.259 18:21:05 -- target/tls.sh@45 -- # killprocess 76707 00:12:07.259 18:21:05 -- common/autotest_common.sh@936 -- # '[' -z 76707 ']' 00:12:07.259 18:21:05 -- common/autotest_common.sh@940 -- # kill -0 76707 00:12:07.259 18:21:05 -- common/autotest_common.sh@941 -- # uname 00:12:07.259 18:21:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:07.517 18:21:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76707 00:12:07.517 killing process with pid 76707 00:12:07.517 Received shutdown signal, test time was about 10.000000 seconds 00:12:07.517 00:12:07.517 Latency(us) 00:12:07.517 [2024-11-17T18:21:05.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.517 [2024-11-17T18:21:05.784Z] =================================================================================================================== 00:12:07.517 [2024-11-17T18:21:05.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:07.517 18:21:05 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:07.517 18:21:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:07.517 18:21:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76707' 00:12:07.517 18:21:05 -- common/autotest_common.sh@955 -- # kill 76707 00:12:07.517 18:21:05 -- common/autotest_common.sh@960 -- # wait 76707 00:12:07.517 18:21:05 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:07.517 18:21:05 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:07.517 18:21:05 -- common/autotest_common.sh@650 -- # local es=0 00:12:07.517 18:21:05 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:07.517 18:21:05 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:07.517 18:21:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.517 18:21:05 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:07.517 18:21:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.517 18:21:05 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:07.517 18:21:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:07.517 18:21:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:07.517 18:21:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:07.517 18:21:05 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:07.517 18:21:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:07.517 18:21:05 -- target/tls.sh@28 -- # bdevperf_pid=76842 00:12:07.517 18:21:05 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:07.517 18:21:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:07.517 18:21:05 -- target/tls.sh@31 -- # waitforlisten 76842 /var/tmp/bdevperf.sock 00:12:07.517 18:21:05 -- common/autotest_common.sh@829 -- # '[' -z 76842 ']' 00:12:07.517 18:21:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:07.517 18:21:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:07.517 18:21:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:07.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:07.517 18:21:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:07.517 18:21:05 -- common/autotest_common.sh@10 -- # set +x 00:12:07.517 [2024-11-17 18:21:05.767866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:07.517 [2024-11-17 18:21:05.767960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76842 ] 00:12:07.775 [2024-11-17 18:21:05.899572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.775 [2024-11-17 18:21:05.934429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.775 18:21:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.775 18:21:06 -- common/autotest_common.sh@862 -- # return 0 00:12:07.775 18:21:06 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:08.033 [2024-11-17 18:21:06.262557] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:08.033 [2024-11-17 18:21:06.262629] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:08.033 request: 00:12:08.033 { 00:12:08.033 "name": "TLSTEST", 00:12:08.033 "trtype": "tcp", 00:12:08.033 "traddr": "10.0.0.2", 00:12:08.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:08.033 "adrfam": "ipv4", 00:12:08.033 "trsvcid": "4420", 00:12:08.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:08.033 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:08.033 "method": "bdev_nvme_attach_controller", 00:12:08.033 "req_id": 1 00:12:08.033 } 00:12:08.033 Got JSON-RPC error response 00:12:08.033 response: 00:12:08.033 { 00:12:08.033 "code": -22, 00:12:08.033 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:08.033 } 00:12:08.033 18:21:06 -- target/tls.sh@36 -- # killprocess 76842 00:12:08.033 18:21:06 -- common/autotest_common.sh@936 -- # '[' -z 76842 ']' 00:12:08.033 18:21:06 -- common/autotest_common.sh@940 -- # kill -0 76842 00:12:08.033 18:21:06 -- common/autotest_common.sh@941 -- # uname 00:12:08.033 18:21:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:08.033 18:21:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76842 00:12:08.291 killing process with pid 76842 00:12:08.291 Received shutdown signal, test time was about 10.000000 seconds 00:12:08.291 00:12:08.291 Latency(us) 00:12:08.291 [2024-11-17T18:21:06.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.291 [2024-11-17T18:21:06.558Z] =================================================================================================================== 00:12:08.291 [2024-11-17T18:21:06.558Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:08.291 18:21:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:08.291 18:21:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:08.291 18:21:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76842' 00:12:08.291 18:21:06 -- common/autotest_common.sh@955 -- # kill 76842 00:12:08.291 18:21:06 -- common/autotest_common.sh@960 -- # wait 76842 00:12:08.291 18:21:06 -- target/tls.sh@37 -- # return 1 00:12:08.291 18:21:06 -- common/autotest_common.sh@653 -- # es=1 00:12:08.291 18:21:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:08.291 18:21:06 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:08.291 18:21:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:08.291 18:21:06 -- target/tls.sh@183 -- # killprocess 76651 00:12:08.291 18:21:06 -- common/autotest_common.sh@936 -- # '[' -z 76651 ']' 00:12:08.291 18:21:06 -- common/autotest_common.sh@940 -- # kill -0 76651 00:12:08.291 18:21:06 -- common/autotest_common.sh@941 -- # uname 00:12:08.291 18:21:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:08.291 18:21:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76651 00:12:08.291 killing process with pid 76651 00:12:08.291 18:21:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:08.291 18:21:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:08.291 18:21:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76651' 00:12:08.291 18:21:06 -- common/autotest_common.sh@955 -- # kill 76651 00:12:08.291 18:21:06 -- common/autotest_common.sh@960 -- # wait 76651 00:12:08.550 18:21:06 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:12:08.550 18:21:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:08.550 18:21:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:08.550 18:21:06 -- common/autotest_common.sh@10 -- # set +x 00:12:08.550 18:21:06 -- nvmf/common.sh@469 -- # nvmfpid=76867 00:12:08.550 18:21:06 -- nvmf/common.sh@470 -- # waitforlisten 76867 00:12:08.550 18:21:06 -- common/autotest_common.sh@829 -- # '[' -z 76867 ']' 00:12:08.550 18:21:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.550 18:21:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:08.550 18:21:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.550 18:21:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.550 18:21:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.550 18:21:06 -- common/autotest_common.sh@10 -- # set +x 00:12:08.550 [2024-11-17 18:21:06.690509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:08.550 [2024-11-17 18:21:06.691034] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.808 [2024-11-17 18:21:06.827684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.808 [2024-11-17 18:21:06.864986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:08.808 [2024-11-17 18:21:06.865148] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.808 [2024-11-17 18:21:06.865163] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.808 [2024-11-17 18:21:06.865174] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:08.808 [2024-11-17 18:21:06.865219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.808 18:21:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.808 18:21:06 -- common/autotest_common.sh@862 -- # return 0 00:12:08.808 18:21:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:08.808 18:21:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.808 18:21:06 -- common/autotest_common.sh@10 -- # set +x 00:12:08.808 18:21:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.808 18:21:06 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:08.808 18:21:06 -- common/autotest_common.sh@650 -- # local es=0 00:12:08.808 18:21:06 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:08.808 18:21:06 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:12:08.808 18:21:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.809 18:21:06 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:12:08.809 18:21:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.809 18:21:06 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:08.809 18:21:06 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:08.809 18:21:06 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:09.067 [2024-11-17 18:21:07.262032] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.067 18:21:07 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:09.326 18:21:07 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:09.584 [2024-11-17 18:21:07.806178] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:09.584 [2024-11-17 18:21:07.806441] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.584 18:21:07 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:09.843 malloc0 00:12:09.843 18:21:08 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:10.410 18:21:08 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:10.410 [2024-11-17 18:21:08.596782] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:10.411 [2024-11-17 18:21:08.596828] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:10.411 [2024-11-17 18:21:08.596848] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:12:10.411 request: 00:12:10.411 { 00:12:10.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.411 "host": "nqn.2016-06.io.spdk:host1", 00:12:10.411 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:10.411 "method": "nvmf_subsystem_add_host", 00:12:10.411 
"req_id": 1 00:12:10.411 } 00:12:10.411 Got JSON-RPC error response 00:12:10.411 response: 00:12:10.411 { 00:12:10.411 "code": -32603, 00:12:10.411 "message": "Internal error" 00:12:10.411 } 00:12:10.411 18:21:08 -- common/autotest_common.sh@653 -- # es=1 00:12:10.411 18:21:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:10.411 18:21:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:10.411 18:21:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:10.411 18:21:08 -- target/tls.sh@189 -- # killprocess 76867 00:12:10.411 18:21:08 -- common/autotest_common.sh@936 -- # '[' -z 76867 ']' 00:12:10.411 18:21:08 -- common/autotest_common.sh@940 -- # kill -0 76867 00:12:10.411 18:21:08 -- common/autotest_common.sh@941 -- # uname 00:12:10.411 18:21:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:10.411 18:21:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76867 00:12:10.411 18:21:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:10.411 killing process with pid 76867 00:12:10.411 18:21:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:10.411 18:21:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76867' 00:12:10.411 18:21:08 -- common/autotest_common.sh@955 -- # kill 76867 00:12:10.411 18:21:08 -- common/autotest_common.sh@960 -- # wait 76867 00:12:10.669 18:21:08 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:10.669 18:21:08 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:12:10.669 18:21:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:10.669 18:21:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:10.669 18:21:08 -- common/autotest_common.sh@10 -- # set +x 00:12:10.669 18:21:08 -- nvmf/common.sh@469 -- # nvmfpid=76922 00:12:10.669 18:21:08 -- nvmf/common.sh@470 -- # waitforlisten 76922 00:12:10.669 18:21:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:10.669 18:21:08 -- common/autotest_common.sh@829 -- # '[' -z 76922 ']' 00:12:10.669 18:21:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.669 18:21:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.669 18:21:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.669 18:21:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.669 18:21:08 -- common/autotest_common.sh@10 -- # set +x 00:12:10.669 [2024-11-17 18:21:08.859692] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:10.669 [2024-11-17 18:21:08.859782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.927 [2024-11-17 18:21:08.996596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.927 [2024-11-17 18:21:09.029748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:10.927 [2024-11-17 18:21:09.029896] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:10.927 [2024-11-17 18:21:09.029910] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.927 [2024-11-17 18:21:09.029919] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.927 [2024-11-17 18:21:09.029951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.927 18:21:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.927 18:21:09 -- common/autotest_common.sh@862 -- # return 0 00:12:10.927 18:21:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:10.927 18:21:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:10.927 18:21:09 -- common/autotest_common.sh@10 -- # set +x 00:12:10.927 18:21:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.927 18:21:09 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:10.927 18:21:09 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:10.927 18:21:09 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:11.184 [2024-11-17 18:21:09.405586] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.184 18:21:09 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:11.750 18:21:09 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:11.750 [2024-11-17 18:21:09.993746] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:11.750 [2024-11-17 18:21:09.993986] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.750 18:21:10 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:12.316 malloc0 00:12:12.316 18:21:10 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:12.316 18:21:10 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:12.883 18:21:10 -- target/tls.sh@197 -- # bdevperf_pid=76969 00:12:12.883 18:21:10 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:12.883 18:21:10 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:12.883 18:21:10 -- target/tls.sh@200 -- # waitforlisten 76969 /var/tmp/bdevperf.sock 00:12:12.883 18:21:10 -- common/autotest_common.sh@829 -- # '[' -z 76969 ']' 00:12:12.883 18:21:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:12.883 18:21:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:12.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:12.883 18:21:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:12:12.883 18:21:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:12.883 18:21:10 -- common/autotest_common.sh@10 -- # set +x 00:12:12.883 [2024-11-17 18:21:10.905480] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:12.883 [2024-11-17 18:21:10.905585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76969 ] 00:12:12.883 [2024-11-17 18:21:11.039435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.883 [2024-11-17 18:21:11.074578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.142 18:21:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:13.142 18:21:11 -- common/autotest_common.sh@862 -- # return 0 00:12:13.142 18:21:11 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:13.142 [2024-11-17 18:21:11.404579] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:13.401 TLSTESTn1 00:12:13.401 18:21:11 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:13.659 18:21:11 -- target/tls.sh@205 -- # tgtconf='{ 00:12:13.659 "subsystems": [ 00:12:13.659 { 00:12:13.659 "subsystem": "iobuf", 00:12:13.659 "config": [ 00:12:13.659 { 00:12:13.659 "method": "iobuf_set_options", 00:12:13.659 "params": { 00:12:13.659 "small_pool_count": 8192, 00:12:13.659 "large_pool_count": 1024, 00:12:13.659 "small_bufsize": 8192, 00:12:13.659 "large_bufsize": 135168 00:12:13.659 } 00:12:13.659 } 00:12:13.659 ] 00:12:13.659 }, 00:12:13.659 { 00:12:13.659 "subsystem": "sock", 00:12:13.659 "config": [ 00:12:13.659 { 00:12:13.659 "method": "sock_impl_set_options", 00:12:13.659 "params": { 00:12:13.659 "impl_name": "uring", 00:12:13.659 "recv_buf_size": 2097152, 00:12:13.659 "send_buf_size": 2097152, 00:12:13.660 "enable_recv_pipe": true, 00:12:13.660 "enable_quickack": false, 00:12:13.660 "enable_placement_id": 0, 00:12:13.660 "enable_zerocopy_send_server": false, 00:12:13.660 "enable_zerocopy_send_client": false, 00:12:13.660 "zerocopy_threshold": 0, 00:12:13.660 "tls_version": 0, 00:12:13.660 "enable_ktls": false 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "sock_impl_set_options", 00:12:13.660 "params": { 00:12:13.660 "impl_name": "posix", 00:12:13.660 "recv_buf_size": 2097152, 00:12:13.660 "send_buf_size": 2097152, 00:12:13.660 "enable_recv_pipe": true, 00:12:13.660 "enable_quickack": false, 00:12:13.660 "enable_placement_id": 0, 00:12:13.660 "enable_zerocopy_send_server": true, 00:12:13.660 "enable_zerocopy_send_client": false, 00:12:13.660 "zerocopy_threshold": 0, 00:12:13.660 "tls_version": 0, 00:12:13.660 "enable_ktls": false 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "sock_impl_set_options", 00:12:13.660 "params": { 00:12:13.660 "impl_name": "ssl", 00:12:13.660 "recv_buf_size": 4096, 00:12:13.660 "send_buf_size": 4096, 00:12:13.660 "enable_recv_pipe": true, 00:12:13.660 "enable_quickack": false, 00:12:13.660 "enable_placement_id": 0, 00:12:13.660 "enable_zerocopy_send_server": true, 00:12:13.660 "enable_zerocopy_send_client": false, 00:12:13.660 
"zerocopy_threshold": 0, 00:12:13.660 "tls_version": 0, 00:12:13.660 "enable_ktls": false 00:12:13.660 } 00:12:13.660 } 00:12:13.660 ] 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "subsystem": "vmd", 00:12:13.660 "config": [] 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "subsystem": "accel", 00:12:13.660 "config": [ 00:12:13.660 { 00:12:13.660 "method": "accel_set_options", 00:12:13.660 "params": { 00:12:13.660 "small_cache_size": 128, 00:12:13.660 "large_cache_size": 16, 00:12:13.660 "task_count": 2048, 00:12:13.660 "sequence_count": 2048, 00:12:13.660 "buf_count": 2048 00:12:13.660 } 00:12:13.660 } 00:12:13.660 ] 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "subsystem": "bdev", 00:12:13.660 "config": [ 00:12:13.660 { 00:12:13.660 "method": "bdev_set_options", 00:12:13.660 "params": { 00:12:13.660 "bdev_io_pool_size": 65535, 00:12:13.660 "bdev_io_cache_size": 256, 00:12:13.660 "bdev_auto_examine": true, 00:12:13.660 "iobuf_small_cache_size": 128, 00:12:13.660 "iobuf_large_cache_size": 16 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "bdev_raid_set_options", 00:12:13.660 "params": { 00:12:13.660 "process_window_size_kb": 1024 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "bdev_iscsi_set_options", 00:12:13.660 "params": { 00:12:13.660 "timeout_sec": 30 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "bdev_nvme_set_options", 00:12:13.660 "params": { 00:12:13.660 "action_on_timeout": "none", 00:12:13.660 "timeout_us": 0, 00:12:13.660 "timeout_admin_us": 0, 00:12:13.660 "keep_alive_timeout_ms": 10000, 00:12:13.660 "transport_retry_count": 4, 00:12:13.660 "arbitration_burst": 0, 00:12:13.660 "low_priority_weight": 0, 00:12:13.660 "medium_priority_weight": 0, 00:12:13.660 "high_priority_weight": 0, 00:12:13.660 "nvme_adminq_poll_period_us": 10000, 00:12:13.660 "nvme_ioq_poll_period_us": 0, 00:12:13.660 "io_queue_requests": 0, 00:12:13.660 "delay_cmd_submit": true, 00:12:13.660 "bdev_retry_count": 3, 00:12:13.660 "transport_ack_timeout": 0, 00:12:13.660 "ctrlr_loss_timeout_sec": 0, 00:12:13.660 "reconnect_delay_sec": 0, 00:12:13.660 "fast_io_fail_timeout_sec": 0, 00:12:13.660 "generate_uuids": false, 00:12:13.660 "transport_tos": 0, 00:12:13.660 "io_path_stat": false, 00:12:13.660 "allow_accel_sequence": false 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "bdev_nvme_set_hotplug", 00:12:13.660 "params": { 00:12:13.660 "period_us": 100000, 00:12:13.660 "enable": false 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "bdev_malloc_create", 00:12:13.660 "params": { 00:12:13.660 "name": "malloc0", 00:12:13.660 "num_blocks": 8192, 00:12:13.660 "block_size": 4096, 00:12:13.660 "physical_block_size": 4096, 00:12:13.660 "uuid": "401ba919-516b-4f68-84d4-dd7aa82f5011", 00:12:13.660 "optimal_io_boundary": 0 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "bdev_wait_for_examine" 00:12:13.660 } 00:12:13.660 ] 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "subsystem": "nbd", 00:12:13.660 "config": [] 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "subsystem": "scheduler", 00:12:13.660 "config": [ 00:12:13.660 { 00:12:13.660 "method": "framework_set_scheduler", 00:12:13.660 "params": { 00:12:13.660 "name": "static" 00:12:13.660 } 00:12:13.660 } 00:12:13.660 ] 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "subsystem": "nvmf", 00:12:13.660 "config": [ 00:12:13.660 { 00:12:13.660 "method": "nvmf_set_config", 00:12:13.660 "params": { 00:12:13.660 "discovery_filter": "match_any", 00:12:13.660 
"admin_cmd_passthru": { 00:12:13.660 "identify_ctrlr": false 00:12:13.660 } 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "nvmf_set_max_subsystems", 00:12:13.660 "params": { 00:12:13.660 "max_subsystems": 1024 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "nvmf_set_crdt", 00:12:13.660 "params": { 00:12:13.660 "crdt1": 0, 00:12:13.660 "crdt2": 0, 00:12:13.660 "crdt3": 0 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "nvmf_create_transport", 00:12:13.660 "params": { 00:12:13.660 "trtype": "TCP", 00:12:13.660 "max_queue_depth": 128, 00:12:13.660 "max_io_qpairs_per_ctrlr": 127, 00:12:13.660 "in_capsule_data_size": 4096, 00:12:13.660 "max_io_size": 131072, 00:12:13.660 "io_unit_size": 131072, 00:12:13.660 "max_aq_depth": 128, 00:12:13.660 "num_shared_buffers": 511, 00:12:13.660 "buf_cache_size": 4294967295, 00:12:13.660 "dif_insert_or_strip": false, 00:12:13.660 "zcopy": false, 00:12:13.660 "c2h_success": false, 00:12:13.660 "sock_priority": 0, 00:12:13.660 "abort_timeout_sec": 1 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "nvmf_create_subsystem", 00:12:13.660 "params": { 00:12:13.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:13.660 "allow_any_host": false, 00:12:13.660 "serial_number": "SPDK00000000000001", 00:12:13.660 "model_number": "SPDK bdev Controller", 00:12:13.660 "max_namespaces": 10, 00:12:13.660 "min_cntlid": 1, 00:12:13.660 "max_cntlid": 65519, 00:12:13.660 "ana_reporting": false 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "nvmf_subsystem_add_host", 00:12:13.660 "params": { 00:12:13.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:13.660 "host": "nqn.2016-06.io.spdk:host1", 00:12:13.660 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.660 "method": "nvmf_subsystem_add_ns", 00:12:13.660 "params": { 00:12:13.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:13.660 "namespace": { 00:12:13.660 "nsid": 1, 00:12:13.660 "bdev_name": "malloc0", 00:12:13.660 "nguid": "401BA919516B4F6884D4DD7AA82F5011", 00:12:13.660 "uuid": "401ba919-516b-4f68-84d4-dd7aa82f5011" 00:12:13.660 } 00:12:13.660 } 00:12:13.660 }, 00:12:13.660 { 00:12:13.661 "method": "nvmf_subsystem_add_listener", 00:12:13.661 "params": { 00:12:13.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:13.661 "listen_address": { 00:12:13.661 "trtype": "TCP", 00:12:13.661 "adrfam": "IPv4", 00:12:13.661 "traddr": "10.0.0.2", 00:12:13.661 "trsvcid": "4420" 00:12:13.661 }, 00:12:13.661 "secure_channel": true 00:12:13.661 } 00:12:13.661 } 00:12:13.661 ] 00:12:13.661 } 00:12:13.661 ] 00:12:13.661 }' 00:12:13.661 18:21:11 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:14.227 18:21:12 -- target/tls.sh@206 -- # bdevperfconf='{ 00:12:14.227 "subsystems": [ 00:12:14.227 { 00:12:14.227 "subsystem": "iobuf", 00:12:14.228 "config": [ 00:12:14.228 { 00:12:14.228 "method": "iobuf_set_options", 00:12:14.228 "params": { 00:12:14.228 "small_pool_count": 8192, 00:12:14.228 "large_pool_count": 1024, 00:12:14.228 "small_bufsize": 8192, 00:12:14.228 "large_bufsize": 135168 00:12:14.228 } 00:12:14.228 } 00:12:14.228 ] 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "subsystem": "sock", 00:12:14.228 "config": [ 00:12:14.228 { 00:12:14.228 "method": "sock_impl_set_options", 00:12:14.228 "params": { 00:12:14.228 "impl_name": "uring", 00:12:14.228 "recv_buf_size": 2097152, 00:12:14.228 "send_buf_size": 2097152, 
00:12:14.228 "enable_recv_pipe": true, 00:12:14.228 "enable_quickack": false, 00:12:14.228 "enable_placement_id": 0, 00:12:14.228 "enable_zerocopy_send_server": false, 00:12:14.228 "enable_zerocopy_send_client": false, 00:12:14.228 "zerocopy_threshold": 0, 00:12:14.228 "tls_version": 0, 00:12:14.228 "enable_ktls": false 00:12:14.228 } 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "method": "sock_impl_set_options", 00:12:14.228 "params": { 00:12:14.228 "impl_name": "posix", 00:12:14.228 "recv_buf_size": 2097152, 00:12:14.228 "send_buf_size": 2097152, 00:12:14.228 "enable_recv_pipe": true, 00:12:14.228 "enable_quickack": false, 00:12:14.228 "enable_placement_id": 0, 00:12:14.228 "enable_zerocopy_send_server": true, 00:12:14.228 "enable_zerocopy_send_client": false, 00:12:14.228 "zerocopy_threshold": 0, 00:12:14.228 "tls_version": 0, 00:12:14.228 "enable_ktls": false 00:12:14.228 } 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "method": "sock_impl_set_options", 00:12:14.228 "params": { 00:12:14.228 "impl_name": "ssl", 00:12:14.228 "recv_buf_size": 4096, 00:12:14.228 "send_buf_size": 4096, 00:12:14.228 "enable_recv_pipe": true, 00:12:14.228 "enable_quickack": false, 00:12:14.228 "enable_placement_id": 0, 00:12:14.228 "enable_zerocopy_send_server": true, 00:12:14.228 "enable_zerocopy_send_client": false, 00:12:14.228 "zerocopy_threshold": 0, 00:12:14.228 "tls_version": 0, 00:12:14.228 "enable_ktls": false 00:12:14.228 } 00:12:14.228 } 00:12:14.228 ] 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "subsystem": "vmd", 00:12:14.228 "config": [] 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "subsystem": "accel", 00:12:14.228 "config": [ 00:12:14.228 { 00:12:14.228 "method": "accel_set_options", 00:12:14.228 "params": { 00:12:14.228 "small_cache_size": 128, 00:12:14.228 "large_cache_size": 16, 00:12:14.228 "task_count": 2048, 00:12:14.228 "sequence_count": 2048, 00:12:14.228 "buf_count": 2048 00:12:14.228 } 00:12:14.228 } 00:12:14.228 ] 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "subsystem": "bdev", 00:12:14.228 "config": [ 00:12:14.228 { 00:12:14.228 "method": "bdev_set_options", 00:12:14.228 "params": { 00:12:14.228 "bdev_io_pool_size": 65535, 00:12:14.228 "bdev_io_cache_size": 256, 00:12:14.228 "bdev_auto_examine": true, 00:12:14.228 "iobuf_small_cache_size": 128, 00:12:14.228 "iobuf_large_cache_size": 16 00:12:14.228 } 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "method": "bdev_raid_set_options", 00:12:14.228 "params": { 00:12:14.228 "process_window_size_kb": 1024 00:12:14.228 } 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "method": "bdev_iscsi_set_options", 00:12:14.228 "params": { 00:12:14.228 "timeout_sec": 30 00:12:14.228 } 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "method": "bdev_nvme_set_options", 00:12:14.228 "params": { 00:12:14.228 "action_on_timeout": "none", 00:12:14.228 "timeout_us": 0, 00:12:14.228 "timeout_admin_us": 0, 00:12:14.228 "keep_alive_timeout_ms": 10000, 00:12:14.228 "transport_retry_count": 4, 00:12:14.228 "arbitration_burst": 0, 00:12:14.228 "low_priority_weight": 0, 00:12:14.228 "medium_priority_weight": 0, 00:12:14.228 "high_priority_weight": 0, 00:12:14.228 "nvme_adminq_poll_period_us": 10000, 00:12:14.228 "nvme_ioq_poll_period_us": 0, 00:12:14.228 "io_queue_requests": 512, 00:12:14.228 "delay_cmd_submit": true, 00:12:14.228 "bdev_retry_count": 3, 00:12:14.228 "transport_ack_timeout": 0, 00:12:14.228 "ctrlr_loss_timeout_sec": 0, 00:12:14.228 "reconnect_delay_sec": 0, 00:12:14.228 "fast_io_fail_timeout_sec": 0, 00:12:14.228 "generate_uuids": false, 00:12:14.228 
"transport_tos": 0, 00:12:14.228 "io_path_stat": false, 00:12:14.228 "allow_accel_sequence": false 00:12:14.228 } 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "method": "bdev_nvme_attach_controller", 00:12:14.228 "params": { 00:12:14.228 "name": "TLSTEST", 00:12:14.228 "trtype": "TCP", 00:12:14.228 "adrfam": "IPv4", 00:12:14.228 "traddr": "10.0.0.2", 00:12:14.228 "trsvcid": "4420", 00:12:14.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.228 "prchk_reftag": false, 00:12:14.228 "prchk_guard": false, 00:12:14.228 "ctrlr_loss_timeout_sec": 0, 00:12:14.228 "reconnect_delay_sec": 0, 00:12:14.228 "fast_io_fail_timeout_sec": 0, 00:12:14.228 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:14.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.228 "hdgst": false, 00:12:14.228 "ddgst": false 00:12:14.228 } 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "method": "bdev_nvme_set_hotplug", 00:12:14.228 "params": { 00:12:14.228 "period_us": 100000, 00:12:14.228 "enable": false 00:12:14.228 } 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "method": "bdev_wait_for_examine" 00:12:14.228 } 00:12:14.228 ] 00:12:14.228 }, 00:12:14.228 { 00:12:14.228 "subsystem": "nbd", 00:12:14.228 "config": [] 00:12:14.228 } 00:12:14.228 ] 00:12:14.228 }' 00:12:14.228 18:21:12 -- target/tls.sh@208 -- # killprocess 76969 00:12:14.228 18:21:12 -- common/autotest_common.sh@936 -- # '[' -z 76969 ']' 00:12:14.228 18:21:12 -- common/autotest_common.sh@940 -- # kill -0 76969 00:12:14.228 18:21:12 -- common/autotest_common.sh@941 -- # uname 00:12:14.229 18:21:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:14.229 18:21:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76969 00:12:14.229 18:21:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:14.229 18:21:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:14.229 killing process with pid 76969 00:12:14.229 18:21:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76969' 00:12:14.229 Received shutdown signal, test time was about 10.000000 seconds 00:12:14.229 00:12:14.229 Latency(us) 00:12:14.229 [2024-11-17T18:21:12.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.229 [2024-11-17T18:21:12.496Z] =================================================================================================================== 00:12:14.229 [2024-11-17T18:21:12.496Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:14.229 18:21:12 -- common/autotest_common.sh@955 -- # kill 76969 00:12:14.229 18:21:12 -- common/autotest_common.sh@960 -- # wait 76969 00:12:14.229 18:21:12 -- target/tls.sh@209 -- # killprocess 76922 00:12:14.229 18:21:12 -- common/autotest_common.sh@936 -- # '[' -z 76922 ']' 00:12:14.229 18:21:12 -- common/autotest_common.sh@940 -- # kill -0 76922 00:12:14.229 18:21:12 -- common/autotest_common.sh@941 -- # uname 00:12:14.229 18:21:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:14.229 18:21:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76922 00:12:14.229 18:21:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:14.229 18:21:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:14.229 killing process with pid 76922 00:12:14.229 18:21:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76922' 00:12:14.229 18:21:12 -- common/autotest_common.sh@955 -- # kill 76922 00:12:14.229 18:21:12 -- common/autotest_common.sh@960 -- # 
wait 76922 00:12:14.488 18:21:12 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:14.488 18:21:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:14.488 18:21:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:14.488 18:21:12 -- target/tls.sh@212 -- # echo '{ 00:12:14.488 "subsystems": [ 00:12:14.488 { 00:12:14.488 "subsystem": "iobuf", 00:12:14.488 "config": [ 00:12:14.488 { 00:12:14.488 "method": "iobuf_set_options", 00:12:14.488 "params": { 00:12:14.488 "small_pool_count": 8192, 00:12:14.488 "large_pool_count": 1024, 00:12:14.488 "small_bufsize": 8192, 00:12:14.488 "large_bufsize": 135168 00:12:14.488 } 00:12:14.488 } 00:12:14.488 ] 00:12:14.488 }, 00:12:14.488 { 00:12:14.488 "subsystem": "sock", 00:12:14.488 "config": [ 00:12:14.488 { 00:12:14.488 "method": "sock_impl_set_options", 00:12:14.488 "params": { 00:12:14.488 "impl_name": "uring", 00:12:14.488 "recv_buf_size": 2097152, 00:12:14.488 "send_buf_size": 2097152, 00:12:14.488 "enable_recv_pipe": true, 00:12:14.488 "enable_quickack": false, 00:12:14.488 "enable_placement_id": 0, 00:12:14.488 "enable_zerocopy_send_server": false, 00:12:14.488 "enable_zerocopy_send_client": false, 00:12:14.488 "zerocopy_threshold": 0, 00:12:14.488 "tls_version": 0, 00:12:14.488 "enable_ktls": false 00:12:14.488 } 00:12:14.488 }, 00:12:14.488 { 00:12:14.488 "method": "sock_impl_set_options", 00:12:14.488 "params": { 00:12:14.488 "impl_name": "posix", 00:12:14.488 "recv_buf_size": 2097152, 00:12:14.488 "send_buf_size": 2097152, 00:12:14.488 "enable_recv_pipe": true, 00:12:14.488 "enable_quickack": false, 00:12:14.488 "enable_placement_id": 0, 00:12:14.488 "enable_zerocopy_send_server": true, 00:12:14.488 "enable_zerocopy_send_client": false, 00:12:14.488 "zerocopy_threshold": 0, 00:12:14.488 "tls_version": 0, 00:12:14.488 "enable_ktls": false 00:12:14.488 } 00:12:14.488 }, 00:12:14.488 { 00:12:14.488 "method": "sock_impl_set_options", 00:12:14.488 "params": { 00:12:14.488 "impl_name": "ssl", 00:12:14.489 "recv_buf_size": 4096, 00:12:14.489 "send_buf_size": 4096, 00:12:14.489 "enable_recv_pipe": true, 00:12:14.489 "enable_quickack": false, 00:12:14.489 "enable_placement_id": 0, 00:12:14.489 "enable_zerocopy_send_server": true, 00:12:14.489 "enable_zerocopy_send_client": false, 00:12:14.489 "zerocopy_threshold": 0, 00:12:14.489 "tls_version": 0, 00:12:14.489 "enable_ktls": false 00:12:14.489 } 00:12:14.489 } 00:12:14.489 ] 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "subsystem": "vmd", 00:12:14.489 "config": [] 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "subsystem": "accel", 00:12:14.489 "config": [ 00:12:14.489 { 00:12:14.489 "method": "accel_set_options", 00:12:14.489 "params": { 00:12:14.489 "small_cache_size": 128, 00:12:14.489 "large_cache_size": 16, 00:12:14.489 "task_count": 2048, 00:12:14.489 "sequence_count": 2048, 00:12:14.489 "buf_count": 2048 00:12:14.489 } 00:12:14.489 } 00:12:14.489 ] 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "subsystem": "bdev", 00:12:14.489 "config": [ 00:12:14.489 { 00:12:14.489 "method": "bdev_set_options", 00:12:14.489 "params": { 00:12:14.489 "bdev_io_pool_size": 65535, 00:12:14.489 "bdev_io_cache_size": 256, 00:12:14.489 "bdev_auto_examine": true, 00:12:14.489 "iobuf_small_cache_size": 128, 00:12:14.489 "iobuf_large_cache_size": 16 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "bdev_raid_set_options", 00:12:14.489 "params": { 00:12:14.489 "process_window_size_kb": 1024 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": 
"bdev_iscsi_set_options", 00:12:14.489 "params": { 00:12:14.489 "timeout_sec": 30 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "bdev_nvme_set_options", 00:12:14.489 "params": { 00:12:14.489 "action_on_timeout": "none", 00:12:14.489 "timeout_us": 0, 00:12:14.489 "timeout_admin_us": 0, 00:12:14.489 "keep_alive_timeout_ms": 10000, 00:12:14.489 "transport_retry_count": 4, 00:12:14.489 "arbitration_burst": 0, 00:12:14.489 "low_priority_weight": 0, 00:12:14.489 "medium_priority_weight": 0, 00:12:14.489 "high_priority_weight": 0, 00:12:14.489 "nvme_adminq_poll_period_us": 10000, 00:12:14.489 "nvme_ioq_poll_period_us": 0, 00:12:14.489 "io_queue_requests": 0, 00:12:14.489 "delay_cmd_submit": true, 00:12:14.489 "bdev_retry_count": 3, 00:12:14.489 "transport_ack_timeout": 0, 00:12:14.489 "ctrlr_loss_timeout_sec": 0, 00:12:14.489 "reconnect_delay_sec": 0, 00:12:14.489 "fast_io_fail_timeout_sec": 0, 00:12:14.489 "generate_uuids": false, 00:12:14.489 "transport_tos": 0, 00:12:14.489 "io_path_stat": false, 00:12:14.489 "allow_accel_sequence": false 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "bdev_nvme_set_hotplug", 00:12:14.489 "params": { 00:12:14.489 "period_us": 100000, 00:12:14.489 "enable": false 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "bdev_malloc_create", 00:12:14.489 "params": { 00:12:14.489 "name": "malloc0", 00:12:14.489 "num_blocks": 8192, 00:12:14.489 "block_size": 4096, 00:12:14.489 "physical_block_size": 4096, 00:12:14.489 "uuid": "401ba919-516b-4f68-84d4-dd7aa82f5011", 00:12:14.489 "optimal_io_boundary": 0 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "bdev_wait_for_examine" 00:12:14.489 } 00:12:14.489 ] 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "subsystem": "nbd", 00:12:14.489 "config": [] 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "subsystem": "scheduler", 00:12:14.489 "config": [ 00:12:14.489 { 00:12:14.489 "method": "framework_set_scheduler", 00:12:14.489 "params": { 00:12:14.489 "name": "static" 00:12:14.489 } 00:12:14.489 } 00:12:14.489 ] 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "subsystem": "nvmf", 00:12:14.489 "config": [ 00:12:14.489 { 00:12:14.489 "method": "nvmf_set_config", 00:12:14.489 "params": { 00:12:14.489 "discovery_filter": "match_any", 00:12:14.489 "admin_cmd_passthru": { 00:12:14.489 "identify_ctrlr": false 00:12:14.489 } 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "nvmf_set_max_subsystems", 00:12:14.489 "params": { 00:12:14.489 "max_subsystems": 1024 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "nvmf_set_crdt", 00:12:14.489 "params": { 00:12:14.489 "crdt1": 0, 00:12:14.489 "crdt2": 0, 00:12:14.489 "crdt3": 0 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "nvmf_create_transport", 00:12:14.489 "params": { 00:12:14.489 "trtype": "TCP", 00:12:14.489 "max_queue_depth": 128, 00:12:14.489 "max_io_qpairs_per_ctrlr": 127, 00:12:14.489 "in_capsule_data_size": 4096, 00:12:14.489 "max_io_size": 131072, 00:12:14.489 "io_unit_size": 131072, 00:12:14.489 "max_aq_depth": 128, 00:12:14.489 "num_shared_buffers": 511, 00:12:14.489 "buf_cache_size": 4294967295, 00:12:14.489 "dif_insert_or_strip": false, 00:12:14.489 "zcopy": false, 00:12:14.489 "c2h_success": false, 00:12:14.489 "sock_priority": 0, 00:12:14.489 "abort_timeout_sec": 1 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "nvmf_create_subsystem", 00:12:14.489 "params": { 00:12:14.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.489 
"allow_any_host": false, 00:12:14.489 "serial_number": "SPDK00000000000001", 00:12:14.489 "model_number": "SPDK bdev Controller", 00:12:14.489 "max_namespaces": 10, 00:12:14.489 "min_cntlid": 1, 00:12:14.489 "max_cntlid": 65519, 00:12:14.489 "ana_reporting": false 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "nvmf_subsystem_add_host", 00:12:14.489 "params": { 00:12:14.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.489 "host": "nqn.2016-06.io.spdk:host1", 00:12:14.489 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "nvmf_subsystem_add_ns", 00:12:14.489 "params": { 00:12:14.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.489 "namespace": { 00:12:14.489 "nsid": 1, 00:12:14.489 "bdev_name": "malloc0", 00:12:14.489 "nguid": "401BA919516B4F6884D4DD7AA82F5011", 00:12:14.489 "uuid": "401ba919-516b-4f68-84d4-dd7aa82f5011" 00:12:14.489 } 00:12:14.489 } 00:12:14.489 }, 00:12:14.489 { 00:12:14.489 "method": "nvmf_subsystem_add_listener", 00:12:14.490 "params": { 00:12:14.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.490 "listen_address": { 00:12:14.490 "trtype": "TCP", 00:12:14.490 "adrfam": "IPv4", 00:12:14.490 "traddr": "10.0.0.2", 00:12:14.490 "trsvcid": "4420" 00:12:14.490 }, 00:12:14.490 "secure_channel": true 00:12:14.490 } 00:12:14.490 } 00:12:14.490 ] 00:12:14.490 } 00:12:14.490 ] 00:12:14.490 }' 00:12:14.490 18:21:12 -- common/autotest_common.sh@10 -- # set +x 00:12:14.490 18:21:12 -- nvmf/common.sh@469 -- # nvmfpid=77007 00:12:14.490 18:21:12 -- nvmf/common.sh@470 -- # waitforlisten 77007 00:12:14.490 18:21:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:14.490 18:21:12 -- common/autotest_common.sh@829 -- # '[' -z 77007 ']' 00:12:14.490 18:21:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.490 18:21:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.490 18:21:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.490 18:21:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.490 18:21:12 -- common/autotest_common.sh@10 -- # set +x 00:12:14.490 [2024-11-17 18:21:12.608916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:14.490 [2024-11-17 18:21:12.609015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.490 [2024-11-17 18:21:12.748729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.749 [2024-11-17 18:21:12.781677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:14.749 [2024-11-17 18:21:12.781825] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.749 [2024-11-17 18:21:12.781839] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.749 [2024-11-17 18:21:12.781848] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:14.749 [2024-11-17 18:21:12.781879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.749 [2024-11-17 18:21:12.960992] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.749 [2024-11-17 18:21:12.992942] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:14.749 [2024-11-17 18:21:12.993159] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.316 18:21:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.316 18:21:13 -- common/autotest_common.sh@862 -- # return 0 00:12:15.316 18:21:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:15.316 18:21:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:15.316 18:21:13 -- common/autotest_common.sh@10 -- # set +x 00:12:15.575 18:21:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.575 18:21:13 -- target/tls.sh@216 -- # bdevperf_pid=77039 00:12:15.575 18:21:13 -- target/tls.sh@217 -- # waitforlisten 77039 /var/tmp/bdevperf.sock 00:12:15.575 18:21:13 -- common/autotest_common.sh@829 -- # '[' -z 77039 ']' 00:12:15.575 18:21:13 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:15.575 18:21:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:15.575 18:21:13 -- target/tls.sh@213 -- # echo '{ 00:12:15.575 "subsystems": [ 00:12:15.575 { 00:12:15.575 "subsystem": "iobuf", 00:12:15.575 "config": [ 00:12:15.575 { 00:12:15.575 "method": "iobuf_set_options", 00:12:15.575 "params": { 00:12:15.575 "small_pool_count": 8192, 00:12:15.575 "large_pool_count": 1024, 00:12:15.575 "small_bufsize": 8192, 00:12:15.575 "large_bufsize": 135168 00:12:15.575 } 00:12:15.575 } 00:12:15.575 ] 00:12:15.575 }, 00:12:15.575 { 00:12:15.575 "subsystem": "sock", 00:12:15.575 "config": [ 00:12:15.575 { 00:12:15.575 "method": "sock_impl_set_options", 00:12:15.575 "params": { 00:12:15.575 "impl_name": "uring", 00:12:15.575 "recv_buf_size": 2097152, 00:12:15.575 "send_buf_size": 2097152, 00:12:15.575 "enable_recv_pipe": true, 00:12:15.576 "enable_quickack": false, 00:12:15.576 "enable_placement_id": 0, 00:12:15.576 "enable_zerocopy_send_server": false, 00:12:15.576 "enable_zerocopy_send_client": false, 00:12:15.576 "zerocopy_threshold": 0, 00:12:15.576 "tls_version": 0, 00:12:15.576 "enable_ktls": false 00:12:15.576 } 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "method": "sock_impl_set_options", 00:12:15.576 "params": { 00:12:15.576 "impl_name": "posix", 00:12:15.576 "recv_buf_size": 2097152, 00:12:15.576 "send_buf_size": 2097152, 00:12:15.576 "enable_recv_pipe": true, 00:12:15.576 "enable_quickack": false, 00:12:15.576 "enable_placement_id": 0, 00:12:15.576 "enable_zerocopy_send_server": true, 00:12:15.576 "enable_zerocopy_send_client": false, 00:12:15.576 "zerocopy_threshold": 0, 00:12:15.576 "tls_version": 0, 00:12:15.576 "enable_ktls": false 00:12:15.576 } 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "method": "sock_impl_set_options", 00:12:15.576 "params": { 00:12:15.576 "impl_name": "ssl", 00:12:15.576 "recv_buf_size": 4096, 00:12:15.576 "send_buf_size": 4096, 00:12:15.576 "enable_recv_pipe": true, 00:12:15.576 "enable_quickack": false, 00:12:15.576 "enable_placement_id": 0, 00:12:15.576 "enable_zerocopy_send_server": true, 00:12:15.576 "enable_zerocopy_send_client": false, 00:12:15.576 "zerocopy_threshold": 
0, 00:12:15.576 "tls_version": 0, 00:12:15.576 "enable_ktls": false 00:12:15.576 } 00:12:15.576 } 00:12:15.576 ] 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "subsystem": "vmd", 00:12:15.576 "config": [] 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "subsystem": "accel", 00:12:15.576 "config": [ 00:12:15.576 { 00:12:15.576 "method": "accel_set_options", 00:12:15.576 "params": { 00:12:15.576 "small_cache_size": 128, 00:12:15.576 "large_cache_size": 16, 00:12:15.576 "task_count": 2048, 00:12:15.576 "sequence_count": 2048, 00:12:15.576 "buf_count": 2048 00:12:15.576 } 00:12:15.576 } 00:12:15.576 ] 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "subsystem": "bdev", 00:12:15.576 "config": [ 00:12:15.576 { 00:12:15.576 "method": "bdev_set_options", 00:12:15.576 "params": { 00:12:15.576 "bdev_io_pool_size": 65535, 00:12:15.576 "bdev_io_cache_size": 256, 00:12:15.576 "bdev_auto_examine": true, 00:12:15.576 "iobuf_small_cache_size": 128, 00:12:15.576 "iobuf_large_cache_size": 16 00:12:15.576 } 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "method": "bdev_raid_set_options", 00:12:15.576 "params": { 00:12:15.576 "process_window_size_kb": 1024 00:12:15.576 } 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "method": "bdev_iscsi_set_options", 00:12:15.576 "params": { 00:12:15.576 "timeout_sec": 30 00:12:15.576 } 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "method": "bdev_nvme_set_options", 00:12:15.576 "params": { 00:12:15.576 "action_on_timeout": "none", 00:12:15.576 "timeout_us": 0, 00:12:15.576 "timeout_admin_us": 0, 00:12:15.576 "keep_alive_timeout_ms": 10000, 00:12:15.576 "transport_retry_count": 4, 00:12:15.576 "arbitration_burst": 0, 00:12:15.576 "low_priority_weight": 0, 00:12:15.576 "medium_priority_weight": 0, 00:12:15.576 "high_priority_weight": 0, 00:12:15.576 "nvme_adminq_poll_period_us": 10000, 00:12:15.576 "nvme_ioq_poll_period_us": 0, 00:12:15.576 "io_queue_requests": 512, 00:12:15.576 "delay_cmd_submit": true, 00:12:15.576 "bdev_retry_count": 3, 00:12:15.576 "transport_ack_timeout": 0, 00:12:15.576 "ctrlr_loss_timeout_sec": 0, 00:12:15.576 "reconnect_delay_sec": 0, 00:12:15.576 "fast_io_fail_timeout_sec": 0, 00:12:15.576 "generate_uuids": false, 00:12:15.576 "transport_tos": 0, 00:12:15.576 "io_path_stat": false, 00:12:15.576 "allow_accel_sequence": false 00:12:15.576 } 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "method": "bdev_nvme_attach_controller", 00:12:15.576 "params": { 00:12:15.576 "name": "TLSTEST", 00:12:15.576 "trtype": "TCP", 00:12:15.576 "adrfam": "IPv4", 00:12:15.576 "traddr": "10.0.0.2", 00:12:15.576 "trsvcid": "4420", 00:12:15.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.576 "prchk_reftag": false, 00:12:15.576 "prchk_guard": false, 00:12:15.576 "ctrlr_loss_timeout_sec": 0, 00:12:15.576 "reconnect_delay_sec": 0, 00:12:15.576 "fast_io_fail_timeout_sec": 0, 00:12:15.576 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:15.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:15.576 "hdgst": false, 00:12:15.576 "ddgst": false 00:12:15.576 } 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "method": "bdev_nvme_set_hotplug", 00:12:15.576 "params": { 00:12:15.576 "period_us": 100000, 00:12:15.576 "enable": false 00:12:15.576 } 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "method": "bdev_wait_for_examine" 00:12:15.576 } 00:12:15.576 ] 00:12:15.576 }, 00:12:15.576 { 00:12:15.576 "subsystem": "nbd", 00:12:15.576 "config": [] 00:12:15.576 } 00:12:15.576 ] 00:12:15.576 }' 00:12:15.576 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:12:15.576 18:21:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:15.576 18:21:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:15.576 18:21:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:15.576 18:21:13 -- common/autotest_common.sh@10 -- # set +x 00:12:15.576 [2024-11-17 18:21:13.641105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:15.576 [2024-11-17 18:21:13.641207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77039 ] 00:12:15.576 [2024-11-17 18:21:13.779760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.576 [2024-11-17 18:21:13.817624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.835 [2024-11-17 18:21:13.941949] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:16.403 18:21:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:16.403 18:21:14 -- common/autotest_common.sh@862 -- # return 0 00:12:16.403 18:21:14 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:16.662 Running I/O for 10 seconds... 00:12:26.635 00:12:26.635 Latency(us) 00:12:26.635 [2024-11-17T18:21:24.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.635 [2024-11-17T18:21:24.902Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:26.635 Verification LBA range: start 0x0 length 0x2000 00:12:26.635 TLSTESTn1 : 10.01 5659.44 22.11 0.00 0.00 22582.03 4051.32 20852.36 00:12:26.635 [2024-11-17T18:21:24.902Z] =================================================================================================================== 00:12:26.635 [2024-11-17T18:21:24.902Z] Total : 5659.44 22.11 0.00 0.00 22582.03 4051.32 20852.36 00:12:26.635 0 00:12:26.635 18:21:24 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:26.635 18:21:24 -- target/tls.sh@223 -- # killprocess 77039 00:12:26.635 18:21:24 -- common/autotest_common.sh@936 -- # '[' -z 77039 ']' 00:12:26.635 18:21:24 -- common/autotest_common.sh@940 -- # kill -0 77039 00:12:26.635 18:21:24 -- common/autotest_common.sh@941 -- # uname 00:12:26.635 18:21:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.635 18:21:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77039 00:12:26.635 18:21:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:26.635 18:21:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:26.635 18:21:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77039' 00:12:26.635 killing process with pid 77039 00:12:26.635 Received shutdown signal, test time was about 10.000000 seconds 00:12:26.635 00:12:26.635 Latency(us) 00:12:26.635 [2024-11-17T18:21:24.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.635 [2024-11-17T18:21:24.902Z] =================================================================================================================== 00:12:26.635 [2024-11-17T18:21:24.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:26.635 18:21:24 -- 
common/autotest_common.sh@955 -- # kill 77039 00:12:26.635 18:21:24 -- common/autotest_common.sh@960 -- # wait 77039 00:12:26.894 18:21:24 -- target/tls.sh@224 -- # killprocess 77007 00:12:26.894 18:21:24 -- common/autotest_common.sh@936 -- # '[' -z 77007 ']' 00:12:26.894 18:21:24 -- common/autotest_common.sh@940 -- # kill -0 77007 00:12:26.894 18:21:24 -- common/autotest_common.sh@941 -- # uname 00:12:26.894 18:21:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.894 18:21:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77007 00:12:26.894 killing process with pid 77007 00:12:26.894 18:21:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:26.894 18:21:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:26.894 18:21:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77007' 00:12:26.894 18:21:25 -- common/autotest_common.sh@955 -- # kill 77007 00:12:26.894 18:21:25 -- common/autotest_common.sh@960 -- # wait 77007 00:12:26.894 18:21:25 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:12:26.894 18:21:25 -- target/tls.sh@227 -- # cleanup 00:12:26.894 18:21:25 -- target/tls.sh@15 -- # process_shm --id 0 00:12:26.894 18:21:25 -- common/autotest_common.sh@806 -- # type=--id 00:12:26.894 18:21:25 -- common/autotest_common.sh@807 -- # id=0 00:12:26.894 18:21:25 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:26.894 18:21:25 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:26.894 18:21:25 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:26.894 18:21:25 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:26.894 18:21:25 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:26.894 18:21:25 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:26.894 nvmf_trace.0 00:12:27.153 18:21:25 -- common/autotest_common.sh@821 -- # return 0 00:12:27.153 18:21:25 -- target/tls.sh@16 -- # killprocess 77039 00:12:27.153 18:21:25 -- common/autotest_common.sh@936 -- # '[' -z 77039 ']' 00:12:27.153 Process with pid 77039 is not found 00:12:27.153 18:21:25 -- common/autotest_common.sh@940 -- # kill -0 77039 00:12:27.153 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77039) - No such process 00:12:27.153 18:21:25 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77039 is not found' 00:12:27.153 18:21:25 -- target/tls.sh@17 -- # nvmftestfini 00:12:27.153 18:21:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:27.153 18:21:25 -- nvmf/common.sh@116 -- # sync 00:12:27.153 18:21:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:27.153 18:21:25 -- nvmf/common.sh@119 -- # set +e 00:12:27.153 18:21:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:27.153 18:21:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:27.153 rmmod nvme_tcp 00:12:27.153 rmmod nvme_fabrics 00:12:27.153 rmmod nvme_keyring 00:12:27.153 18:21:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:27.153 18:21:25 -- nvmf/common.sh@123 -- # set -e 00:12:27.153 18:21:25 -- nvmf/common.sh@124 -- # return 0 00:12:27.153 18:21:25 -- nvmf/common.sh@477 -- # '[' -n 77007 ']' 00:12:27.153 18:21:25 -- nvmf/common.sh@478 -- # killprocess 77007 00:12:27.153 18:21:25 -- common/autotest_common.sh@936 -- # '[' -z 77007 ']' 00:12:27.153 Process with pid 77007 is not found 00:12:27.153 18:21:25 -- 
common/autotest_common.sh@940 -- # kill -0 77007 00:12:27.153 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77007) - No such process 00:12:27.153 18:21:25 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77007 is not found' 00:12:27.153 18:21:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:27.153 18:21:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:27.153 18:21:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:27.153 18:21:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.153 18:21:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:27.153 18:21:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.153 18:21:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.153 18:21:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.153 18:21:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:27.153 18:21:25 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:27.153 00:12:27.153 real 1m6.047s 00:12:27.153 user 1m41.701s 00:12:27.153 sys 0m23.785s 00:12:27.153 18:21:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:27.153 ************************************ 00:12:27.153 18:21:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.153 END TEST nvmf_tls 00:12:27.153 ************************************ 00:12:27.153 18:21:25 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:27.153 18:21:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:27.153 18:21:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:27.153 18:21:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.153 ************************************ 00:12:27.153 START TEST nvmf_fips 00:12:27.153 ************************************ 00:12:27.153 18:21:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:27.419 * Looking for test storage... 
00:12:27.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:27.419 18:21:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:27.419 18:21:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:27.419 18:21:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:27.419 18:21:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:27.419 18:21:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:27.419 18:21:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:27.419 18:21:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:27.419 18:21:25 -- scripts/common.sh@335 -- # IFS=.-: 00:12:27.419 18:21:25 -- scripts/common.sh@335 -- # read -ra ver1 00:12:27.419 18:21:25 -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.419 18:21:25 -- scripts/common.sh@336 -- # read -ra ver2 00:12:27.419 18:21:25 -- scripts/common.sh@337 -- # local 'op=<' 00:12:27.419 18:21:25 -- scripts/common.sh@339 -- # ver1_l=2 00:12:27.419 18:21:25 -- scripts/common.sh@340 -- # ver2_l=1 00:12:27.419 18:21:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:27.419 18:21:25 -- scripts/common.sh@343 -- # case "$op" in 00:12:27.419 18:21:25 -- scripts/common.sh@344 -- # : 1 00:12:27.419 18:21:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:27.419 18:21:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.419 18:21:25 -- scripts/common.sh@364 -- # decimal 1 00:12:27.419 18:21:25 -- scripts/common.sh@352 -- # local d=1 00:12:27.420 18:21:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.420 18:21:25 -- scripts/common.sh@354 -- # echo 1 00:12:27.420 18:21:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:27.420 18:21:25 -- scripts/common.sh@365 -- # decimal 2 00:12:27.420 18:21:25 -- scripts/common.sh@352 -- # local d=2 00:12:27.420 18:21:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.420 18:21:25 -- scripts/common.sh@354 -- # echo 2 00:12:27.420 18:21:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:27.420 18:21:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:27.420 18:21:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:27.420 18:21:25 -- scripts/common.sh@367 -- # return 0 00:12:27.420 18:21:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.420 18:21:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:27.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.420 --rc genhtml_branch_coverage=1 00:12:27.420 --rc genhtml_function_coverage=1 00:12:27.420 --rc genhtml_legend=1 00:12:27.420 --rc geninfo_all_blocks=1 00:12:27.420 --rc geninfo_unexecuted_blocks=1 00:12:27.420 00:12:27.420 ' 00:12:27.420 18:21:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:27.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.420 --rc genhtml_branch_coverage=1 00:12:27.420 --rc genhtml_function_coverage=1 00:12:27.420 --rc genhtml_legend=1 00:12:27.420 --rc geninfo_all_blocks=1 00:12:27.420 --rc geninfo_unexecuted_blocks=1 00:12:27.420 00:12:27.420 ' 00:12:27.420 18:21:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:27.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.420 --rc genhtml_branch_coverage=1 00:12:27.420 --rc genhtml_function_coverage=1 00:12:27.420 --rc genhtml_legend=1 00:12:27.420 --rc geninfo_all_blocks=1 00:12:27.420 --rc geninfo_unexecuted_blocks=1 00:12:27.420 00:12:27.420 ' 00:12:27.420 
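The cmp_versions walk traced just above is the generic dotted-version comparison scripts/common.sh applies to lcov here and reuses further down for the "ge 3.1.1 3.0.0" OpenSSL check in fips.sh. A stand-alone sketch of the same idea (the ver_ge helper name is an illustration, not a function from these scripts):
  ver_ge() {   # true when $1 >= $2, comparing dotted version fields
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
  }
  ver_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL new enough for the FIPS check"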
18:21:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:27.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.420 --rc genhtml_branch_coverage=1 00:12:27.420 --rc genhtml_function_coverage=1 00:12:27.420 --rc genhtml_legend=1 00:12:27.420 --rc geninfo_all_blocks=1 00:12:27.420 --rc geninfo_unexecuted_blocks=1 00:12:27.420 00:12:27.420 ' 00:12:27.420 18:21:25 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:27.420 18:21:25 -- nvmf/common.sh@7 -- # uname -s 00:12:27.420 18:21:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.420 18:21:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.420 18:21:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.420 18:21:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.420 18:21:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.420 18:21:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.420 18:21:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.420 18:21:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.420 18:21:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.420 18:21:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.420 18:21:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:12:27.420 18:21:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:12:27.420 18:21:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.420 18:21:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.420 18:21:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:27.420 18:21:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:27.420 18:21:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.420 18:21:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.420 18:21:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.420 18:21:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.420 18:21:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.420 18:21:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.420 18:21:25 -- paths/export.sh@5 -- # export PATH 00:12:27.420 18:21:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.420 18:21:25 -- nvmf/common.sh@46 -- # : 0 00:12:27.420 18:21:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:27.420 18:21:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:27.420 18:21:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:27.420 18:21:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.420 18:21:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.420 18:21:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:27.420 18:21:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:27.420 18:21:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:27.420 18:21:25 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:27.420 18:21:25 -- fips/fips.sh@89 -- # check_openssl_version 00:12:27.420 18:21:25 -- fips/fips.sh@83 -- # local target=3.0.0 00:12:27.420 18:21:25 -- fips/fips.sh@85 -- # awk '{print $2}' 00:12:27.420 18:21:25 -- fips/fips.sh@85 -- # openssl version 00:12:27.420 18:21:25 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:12:27.420 18:21:25 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:12:27.420 18:21:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:27.420 18:21:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:27.420 18:21:25 -- scripts/common.sh@335 -- # IFS=.-: 00:12:27.420 18:21:25 -- scripts/common.sh@335 -- # read -ra ver1 00:12:27.420 18:21:25 -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.420 18:21:25 -- scripts/common.sh@336 -- # read -ra ver2 00:12:27.420 18:21:25 -- scripts/common.sh@337 -- # local 'op=>=' 00:12:27.420 18:21:25 -- scripts/common.sh@339 -- # ver1_l=3 00:12:27.420 18:21:25 -- scripts/common.sh@340 -- # ver2_l=3 00:12:27.420 18:21:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:27.420 18:21:25 -- scripts/common.sh@343 -- # case "$op" in 00:12:27.420 18:21:25 -- scripts/common.sh@347 -- # : 1 00:12:27.420 18:21:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:27.420 18:21:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:27.420 18:21:25 -- scripts/common.sh@364 -- # decimal 3 00:12:27.420 18:21:25 -- scripts/common.sh@352 -- # local d=3 00:12:27.420 18:21:25 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:27.420 18:21:25 -- scripts/common.sh@354 -- # echo 3 00:12:27.420 18:21:25 -- scripts/common.sh@364 -- # ver1[v]=3 00:12:27.420 18:21:25 -- scripts/common.sh@365 -- # decimal 3 00:12:27.420 18:21:25 -- scripts/common.sh@352 -- # local d=3 00:12:27.420 18:21:25 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:27.420 18:21:25 -- scripts/common.sh@354 -- # echo 3 00:12:27.420 18:21:25 -- scripts/common.sh@365 -- # ver2[v]=3 00:12:27.420 18:21:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:27.420 18:21:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:27.420 18:21:25 -- scripts/common.sh@363 -- # (( v++ )) 00:12:27.420 18:21:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.420 18:21:25 -- scripts/common.sh@364 -- # decimal 1 00:12:27.420 18:21:25 -- scripts/common.sh@352 -- # local d=1 00:12:27.420 18:21:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.420 18:21:25 -- scripts/common.sh@354 -- # echo 1 00:12:27.420 18:21:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:27.420 18:21:25 -- scripts/common.sh@365 -- # decimal 0 00:12:27.420 18:21:25 -- scripts/common.sh@352 -- # local d=0 00:12:27.420 18:21:25 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:27.420 18:21:25 -- scripts/common.sh@354 -- # echo 0 00:12:27.420 18:21:25 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:27.420 18:21:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:27.420 18:21:25 -- scripts/common.sh@366 -- # return 0 00:12:27.420 18:21:25 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:12:27.420 18:21:25 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:12:27.420 18:21:25 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:12:27.420 18:21:25 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:27.420 18:21:25 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:27.420 18:21:25 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:12:27.420 18:21:25 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:12:27.420 18:21:25 -- fips/fips.sh@113 -- # build_openssl_config 00:12:27.420 18:21:25 -- fips/fips.sh@37 -- # cat 00:12:27.420 18:21:25 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:12:27.420 18:21:25 -- fips/fips.sh@58 -- # cat - 00:12:27.420 18:21:25 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:27.420 18:21:25 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:12:27.420 18:21:25 -- fips/fips.sh@116 -- # mapfile -t providers 00:12:27.420 18:21:25 -- fips/fips.sh@116 -- # openssl list -providers 00:12:27.420 18:21:25 -- fips/fips.sh@116 -- # grep name 00:12:27.744 18:21:25 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:12:27.744 18:21:25 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:12:27.744 18:21:25 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:27.744 18:21:25 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:12:27.744 18:21:25 -- fips/fips.sh@127 -- # : 00:12:27.744 18:21:25 -- common/autotest_common.sh@650 -- # local es=0 00:12:27.744 18:21:25 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:27.744 18:21:25 -- common/autotest_common.sh@638 -- # local arg=openssl 00:12:27.744 18:21:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.744 18:21:25 -- common/autotest_common.sh@642 -- # type -t openssl 00:12:27.744 18:21:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.744 18:21:25 -- common/autotest_common.sh@644 -- # type -P openssl 00:12:27.744 18:21:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.744 18:21:25 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:12:27.744 18:21:25 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:12:27.744 18:21:25 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:12:27.744 Error setting digest 00:12:27.744 40824E04127F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:12:27.744 40824E04127F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:12:27.744 18:21:25 -- common/autotest_common.sh@653 -- # es=1 00:12:27.744 18:21:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:27.744 18:21:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:27.744 18:21:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:27.744 18:21:25 -- fips/fips.sh@130 -- # nvmftestinit 00:12:27.744 18:21:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:27.744 18:21:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.744 18:21:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:27.744 18:21:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:27.744 18:21:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:27.744 18:21:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.744 18:21:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.744 18:21:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.744 18:21:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:27.744 18:21:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:27.744 18:21:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:27.744 18:21:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:27.744 18:21:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:27.744 18:21:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:27.744 18:21:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.744 18:21:25 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.744 18:21:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:27.744 18:21:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:27.745 18:21:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:27.745 18:21:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:27.745 18:21:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:27.745 18:21:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.745 18:21:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:27.745 18:21:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:27.745 18:21:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:27.745 18:21:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:27.745 18:21:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:27.745 18:21:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:27.745 Cannot find device "nvmf_tgt_br" 00:12:27.745 18:21:25 -- nvmf/common.sh@154 -- # true 00:12:27.745 18:21:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:27.745 Cannot find device "nvmf_tgt_br2" 00:12:27.745 18:21:25 -- nvmf/common.sh@155 -- # true 00:12:27.745 18:21:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:27.745 18:21:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:27.745 Cannot find device "nvmf_tgt_br" 00:12:27.745 18:21:25 -- nvmf/common.sh@157 -- # true 00:12:27.745 18:21:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:27.745 Cannot find device "nvmf_tgt_br2" 00:12:27.745 18:21:25 -- nvmf/common.sh@158 -- # true 00:12:27.745 18:21:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:27.745 18:21:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:27.745 18:21:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:27.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.745 18:21:25 -- nvmf/common.sh@161 -- # true 00:12:27.745 18:21:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:27.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.745 18:21:25 -- nvmf/common.sh@162 -- # true 00:12:27.745 18:21:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:27.745 18:21:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:27.745 18:21:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:27.745 18:21:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:27.745 18:21:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:27.745 18:21:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:27.745 18:21:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:27.745 18:21:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:27.745 18:21:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:27.745 18:21:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:27.745 18:21:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:27.745 18:21:25 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:27.745 18:21:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:27.745 18:21:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:28.027 18:21:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:28.027 18:21:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:28.027 18:21:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:28.027 18:21:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:28.027 18:21:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:28.027 18:21:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:28.027 18:21:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:28.027 18:21:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:28.027 18:21:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:28.027 18:21:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:28.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:12:28.027 00:12:28.027 --- 10.0.0.2 ping statistics --- 00:12:28.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.027 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:28.027 18:21:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:28.027 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:28.027 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:12:28.027 00:12:28.027 --- 10.0.0.3 ping statistics --- 00:12:28.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.027 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:28.027 18:21:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:28.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:28.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:12:28.027 00:12:28.027 --- 10.0.0.1 ping statistics --- 00:12:28.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.027 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:28.027 18:21:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.027 18:21:26 -- nvmf/common.sh@421 -- # return 0 00:12:28.027 18:21:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:28.027 18:21:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.027 18:21:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:28.027 18:21:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:28.027 18:21:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.027 18:21:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:28.027 18:21:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:28.027 18:21:26 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:12:28.027 18:21:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:28.027 18:21:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:28.027 18:21:26 -- common/autotest_common.sh@10 -- # set +x 00:12:28.027 18:21:26 -- nvmf/common.sh@469 -- # nvmfpid=77396 00:12:28.027 18:21:26 -- nvmf/common.sh@470 -- # waitforlisten 77396 00:12:28.027 18:21:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:28.027 18:21:26 -- common/autotest_common.sh@829 -- # '[' -z 77396 ']' 00:12:28.027 18:21:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.027 18:21:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:28.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.028 18:21:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.028 18:21:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:28.028 18:21:26 -- common/autotest_common.sh@10 -- # set +x 00:12:28.028 [2024-11-17 18:21:26.165984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:28.028 [2024-11-17 18:21:26.166068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.287 [2024-11-17 18:21:26.298217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.287 [2024-11-17 18:21:26.336378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:28.287 [2024-11-17 18:21:26.336562] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.287 [2024-11-17 18:21:26.336579] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.287 [2024-11-17 18:21:26.336589] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
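The three pings above close out nvmf_veth_init's namespace plumbing before nvmf_tgt is launched inside nvmf_tgt_ns_spdk. A stripped-down sketch of the same topology (one veth pair, no bridge or second target interface; names shortened for illustration):
  ip netns add tgt_ns
  ip link add init_if type veth peer name tgt_if   # host end / namespace end
  ip link set tgt_if netns tgt_ns
  ip addr add 10.0.0.1/24 dev init_if && ip link set init_if up
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev tgt_if
  ip netns exec tgt_ns ip link set tgt_if up && ip netns exec tgt_ns ip link set lo up
  ping -c 1 10.0.0.2   # host-side check, mirroring the ping output above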
00:12:28.287 [2024-11-17 18:21:26.336628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.225 18:21:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:29.225 18:21:27 -- common/autotest_common.sh@862 -- # return 0 00:12:29.225 18:21:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:29.225 18:21:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:29.225 18:21:27 -- common/autotest_common.sh@10 -- # set +x 00:12:29.225 18:21:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.225 18:21:27 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:12:29.225 18:21:27 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:29.225 18:21:27 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:29.225 18:21:27 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:29.225 18:21:27 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:29.225 18:21:27 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:29.225 18:21:27 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:29.225 18:21:27 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:29.225 [2024-11-17 18:21:27.401251] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.225 [2024-11-17 18:21:27.417194] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:29.225 [2024-11-17 18:21:27.417419] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.225 malloc0 00:12:29.225 18:21:27 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:29.225 18:21:27 -- fips/fips.sh@147 -- # bdevperf_pid=77430 00:12:29.225 18:21:27 -- fips/fips.sh@148 -- # waitforlisten 77430 /var/tmp/bdevperf.sock 00:12:29.225 18:21:27 -- common/autotest_common.sh@829 -- # '[' -z 77430 ']' 00:12:29.225 18:21:27 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:29.225 18:21:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:29.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:29.225 18:21:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.225 18:21:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:29.225 18:21:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.225 18:21:27 -- common/autotest_common.sh@10 -- # set +x 00:12:29.484 [2024-11-17 18:21:27.546026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
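The fips.sh steps traced above (script lines @136 through @148) boil down to writing the TLS pre-shared key to a file with owner-only permissions and starting bdevperf in wait-for-RPC mode on its own socket. A minimal sketch with paths and flags taken from the trace; the redirect of the echo into the key file is implied by the chmod that follows and is an assumption here.

# Persist the TLS pre-shared key for the test (fips.sh@136-139 above)
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"
# Start bdevperf waiting for RPC configuration (-z) on a dedicated socket (fips.sh@145)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!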
00:12:29.484 [2024-11-17 18:21:27.546152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77430 ] 00:12:29.484 [2024-11-17 18:21:27.693842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.484 [2024-11-17 18:21:27.738953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.421 18:21:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.421 18:21:28 -- common/autotest_common.sh@862 -- # return 0 00:12:30.421 18:21:28 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:30.421 [2024-11-17 18:21:28.675085] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:30.680 TLSTESTn1 00:12:30.680 18:21:28 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:30.680 Running I/O for 10 seconds... 00:12:40.665 00:12:40.665 Latency(us) 00:12:40.665 [2024-11-17T18:21:38.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.665 [2024-11-17T18:21:38.932Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:40.665 Verification LBA range: start 0x0 length 0x2000 00:12:40.665 TLSTESTn1 : 10.01 6805.97 26.59 0.00 0.00 18779.87 2308.65 21924.77 00:12:40.665 [2024-11-17T18:21:38.932Z] =================================================================================================================== 00:12:40.665 [2024-11-17T18:21:38.932Z] Total : 6805.97 26.59 0.00 0.00 18779.87 2308.65 21924.77 00:12:40.665 0 00:12:40.665 18:21:38 -- fips/fips.sh@1 -- # cleanup 00:12:40.665 18:21:38 -- fips/fips.sh@15 -- # process_shm --id 0 00:12:40.665 18:21:38 -- common/autotest_common.sh@806 -- # type=--id 00:12:40.665 18:21:38 -- common/autotest_common.sh@807 -- # id=0 00:12:40.665 18:21:38 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:40.665 18:21:38 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:40.665 18:21:38 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:40.665 18:21:38 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:40.665 18:21:38 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:40.665 18:21:38 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:40.925 nvmf_trace.0 00:12:40.925 18:21:38 -- common/autotest_common.sh@821 -- # return 0 00:12:40.925 18:21:38 -- fips/fips.sh@16 -- # killprocess 77430 00:12:40.925 18:21:38 -- common/autotest_common.sh@936 -- # '[' -z 77430 ']' 00:12:40.925 18:21:38 -- common/autotest_common.sh@940 -- # kill -0 77430 00:12:40.925 18:21:38 -- common/autotest_common.sh@941 -- # uname 00:12:40.925 18:21:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:40.925 18:21:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77430 00:12:40.925 18:21:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:40.925 killing process with pid 77430 00:12:40.925 18:21:39 -- 
common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:40.925 18:21:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77430' 00:12:40.925 18:21:39 -- common/autotest_common.sh@955 -- # kill 77430 00:12:40.925 Received shutdown signal, test time was about 10.000000 seconds 00:12:40.925 00:12:40.925 Latency(us) 00:12:40.925 [2024-11-17T18:21:39.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.925 [2024-11-17T18:21:39.192Z] =================================================================================================================== 00:12:40.925 [2024-11-17T18:21:39.192Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:40.925 18:21:39 -- common/autotest_common.sh@960 -- # wait 77430 00:12:40.925 18:21:39 -- fips/fips.sh@17 -- # nvmftestfini 00:12:40.925 18:21:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:40.925 18:21:39 -- nvmf/common.sh@116 -- # sync 00:12:41.184 18:21:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:41.184 18:21:39 -- nvmf/common.sh@119 -- # set +e 00:12:41.184 18:21:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:41.184 18:21:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:41.184 rmmod nvme_tcp 00:12:41.184 rmmod nvme_fabrics 00:12:41.184 rmmod nvme_keyring 00:12:41.184 18:21:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:41.184 18:21:39 -- nvmf/common.sh@123 -- # set -e 00:12:41.184 18:21:39 -- nvmf/common.sh@124 -- # return 0 00:12:41.184 18:21:39 -- nvmf/common.sh@477 -- # '[' -n 77396 ']' 00:12:41.184 18:21:39 -- nvmf/common.sh@478 -- # killprocess 77396 00:12:41.184 18:21:39 -- common/autotest_common.sh@936 -- # '[' -z 77396 ']' 00:12:41.184 18:21:39 -- common/autotest_common.sh@940 -- # kill -0 77396 00:12:41.184 18:21:39 -- common/autotest_common.sh@941 -- # uname 00:12:41.184 18:21:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:41.184 18:21:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77396 00:12:41.184 18:21:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:41.184 killing process with pid 77396 00:12:41.184 18:21:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:41.184 18:21:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77396' 00:12:41.184 18:21:39 -- common/autotest_common.sh@955 -- # kill 77396 00:12:41.184 18:21:39 -- common/autotest_common.sh@960 -- # wait 77396 00:12:41.444 18:21:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:41.444 18:21:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:41.444 18:21:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:41.444 18:21:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.444 18:21:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:41.444 18:21:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.444 18:21:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.444 18:21:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.444 18:21:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:41.444 18:21:39 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:41.444 00:12:41.444 real 0m14.136s 00:12:41.444 user 0m19.697s 00:12:41.444 sys 0m5.318s 00:12:41.444 18:21:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:41.444 ************************************ 00:12:41.444 END TEST nvmf_fips 00:12:41.444 
************************************ 00:12:41.444 18:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:41.444 18:21:39 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:12:41.444 18:21:39 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:41.444 18:21:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:41.444 18:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.444 18:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:41.444 ************************************ 00:12:41.444 START TEST nvmf_fuzz 00:12:41.444 ************************************ 00:12:41.444 18:21:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:41.444 * Looking for test storage... 00:12:41.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:41.444 18:21:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:41.444 18:21:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:41.444 18:21:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:41.704 18:21:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:41.704 18:21:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:41.704 18:21:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:41.704 18:21:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:41.704 18:21:39 -- scripts/common.sh@335 -- # IFS=.-: 00:12:41.704 18:21:39 -- scripts/common.sh@335 -- # read -ra ver1 00:12:41.704 18:21:39 -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.704 18:21:39 -- scripts/common.sh@336 -- # read -ra ver2 00:12:41.704 18:21:39 -- scripts/common.sh@337 -- # local 'op=<' 00:12:41.704 18:21:39 -- scripts/common.sh@339 -- # ver1_l=2 00:12:41.704 18:21:39 -- scripts/common.sh@340 -- # ver2_l=1 00:12:41.704 18:21:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:41.704 18:21:39 -- scripts/common.sh@343 -- # case "$op" in 00:12:41.704 18:21:39 -- scripts/common.sh@344 -- # : 1 00:12:41.704 18:21:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:41.704 18:21:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.704 18:21:39 -- scripts/common.sh@364 -- # decimal 1 00:12:41.704 18:21:39 -- scripts/common.sh@352 -- # local d=1 00:12:41.704 18:21:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.704 18:21:39 -- scripts/common.sh@354 -- # echo 1 00:12:41.704 18:21:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:41.704 18:21:39 -- scripts/common.sh@365 -- # decimal 2 00:12:41.704 18:21:39 -- scripts/common.sh@352 -- # local d=2 00:12:41.704 18:21:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.704 18:21:39 -- scripts/common.sh@354 -- # echo 2 00:12:41.704 18:21:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:41.704 18:21:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:41.704 18:21:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:41.704 18:21:39 -- scripts/common.sh@367 -- # return 0 00:12:41.704 18:21:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.704 18:21:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:41.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.704 --rc genhtml_branch_coverage=1 00:12:41.704 --rc genhtml_function_coverage=1 00:12:41.704 --rc genhtml_legend=1 00:12:41.704 --rc geninfo_all_blocks=1 00:12:41.704 --rc geninfo_unexecuted_blocks=1 00:12:41.704 00:12:41.704 ' 00:12:41.704 18:21:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:41.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.704 --rc genhtml_branch_coverage=1 00:12:41.704 --rc genhtml_function_coverage=1 00:12:41.704 --rc genhtml_legend=1 00:12:41.704 --rc geninfo_all_blocks=1 00:12:41.704 --rc geninfo_unexecuted_blocks=1 00:12:41.704 00:12:41.704 ' 00:12:41.704 18:21:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:41.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.704 --rc genhtml_branch_coverage=1 00:12:41.704 --rc genhtml_function_coverage=1 00:12:41.704 --rc genhtml_legend=1 00:12:41.704 --rc geninfo_all_blocks=1 00:12:41.704 --rc geninfo_unexecuted_blocks=1 00:12:41.704 00:12:41.704 ' 00:12:41.704 18:21:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:41.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.704 --rc genhtml_branch_coverage=1 00:12:41.704 --rc genhtml_function_coverage=1 00:12:41.704 --rc genhtml_legend=1 00:12:41.704 --rc geninfo_all_blocks=1 00:12:41.704 --rc geninfo_unexecuted_blocks=1 00:12:41.704 00:12:41.704 ' 00:12:41.704 18:21:39 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:41.704 18:21:39 -- nvmf/common.sh@7 -- # uname -s 00:12:41.704 18:21:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.704 18:21:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.704 18:21:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.704 18:21:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.704 18:21:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.704 18:21:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.704 18:21:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.704 18:21:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.704 18:21:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.704 18:21:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.704 18:21:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 
00:12:41.704 18:21:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:12:41.704 18:21:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.704 18:21:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.704 18:21:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:41.704 18:21:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.704 18:21:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.704 18:21:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.704 18:21:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.704 18:21:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.704 18:21:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.704 18:21:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.704 18:21:39 -- paths/export.sh@5 -- # export PATH 00:12:41.704 18:21:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.704 18:21:39 -- nvmf/common.sh@46 -- # : 0 00:12:41.704 18:21:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:41.704 18:21:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:41.704 18:21:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:41.704 18:21:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.704 18:21:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.704 18:21:39 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:41.704 18:21:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:41.704 18:21:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:41.705 18:21:39 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:12:41.705 18:21:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:41.705 18:21:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.705 18:21:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:41.705 18:21:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:41.705 18:21:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:41.705 18:21:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.705 18:21:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.705 18:21:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.705 18:21:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:41.705 18:21:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:41.705 18:21:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:41.705 18:21:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:41.705 18:21:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:41.705 18:21:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:41.705 18:21:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.705 18:21:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.705 18:21:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:41.705 18:21:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:41.705 18:21:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:41.705 18:21:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:41.705 18:21:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:41.705 18:21:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.705 18:21:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:41.705 18:21:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:41.705 18:21:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:41.705 18:21:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:41.705 18:21:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:41.705 18:21:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:41.705 Cannot find device "nvmf_tgt_br" 00:12:41.705 18:21:39 -- nvmf/common.sh@154 -- # true 00:12:41.705 18:21:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:41.705 Cannot find device "nvmf_tgt_br2" 00:12:41.705 18:21:39 -- nvmf/common.sh@155 -- # true 00:12:41.705 18:21:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:41.705 18:21:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:41.705 Cannot find device "nvmf_tgt_br" 00:12:41.705 18:21:39 -- nvmf/common.sh@157 -- # true 00:12:41.705 18:21:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:41.705 Cannot find device "nvmf_tgt_br2" 00:12:41.705 18:21:39 -- nvmf/common.sh@158 -- # true 00:12:41.705 18:21:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:41.705 18:21:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:41.705 18:21:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:41.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.705 18:21:39 -- nvmf/common.sh@161 -- # true 00:12:41.705 18:21:39 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:41.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.705 18:21:39 -- nvmf/common.sh@162 -- # true 00:12:41.705 18:21:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:41.705 18:21:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:41.705 18:21:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:41.705 18:21:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:41.965 18:21:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:41.965 18:21:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:41.965 18:21:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:41.965 18:21:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:41.965 18:21:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:41.965 18:21:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:41.965 18:21:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:41.965 18:21:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:41.965 18:21:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:41.965 18:21:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:41.965 18:21:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:41.965 18:21:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:41.965 18:21:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:41.965 18:21:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:41.965 18:21:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:41.965 18:21:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:41.965 18:21:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:41.965 18:21:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:41.965 18:21:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:41.965 18:21:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:41.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:12:41.965 00:12:41.965 --- 10.0.0.2 ping statistics --- 00:12:41.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.965 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:41.965 18:21:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:41.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:41.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:12:41.965 00:12:41.965 --- 10.0.0.3 ping statistics --- 00:12:41.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.965 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:41.965 18:21:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:41.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:41.965 00:12:41.965 --- 10.0.0.1 ping statistics --- 00:12:41.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.965 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:41.965 18:21:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.965 18:21:40 -- nvmf/common.sh@421 -- # return 0 00:12:41.965 18:21:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:41.965 18:21:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.965 18:21:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:41.965 18:21:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:41.965 18:21:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.965 18:21:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:41.965 18:21:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:41.965 18:21:40 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77764 00:12:41.965 18:21:40 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:41.965 18:21:40 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:41.965 18:21:40 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77764 00:12:41.965 18:21:40 -- common/autotest_common.sh@829 -- # '[' -z 77764 ']' 00:12:41.965 18:21:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.965 18:21:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.965 18:21:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
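The ip/iptables sequence traced above is the harness's nvmf_veth_init: three veth pairs, with the target ends moved into the nvmf_tgt_ns_spdk namespace, the host ends enslaved to a bridge, TCP port 4420 opened for the initiator, and the connectivity pings shown as a final check. Condensed into a sketch, with commands lifted from the trace; only nvmf_tgt_if is shown in full, nvmf_tgt_if2 follows the same pattern with 10.0.0.3.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator-side reachability of the target address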
00:12:41.965 18:21:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.965 18:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:43.342 18:21:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.342 18:21:41 -- common/autotest_common.sh@862 -- # return 0 00:12:43.342 18:21:41 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.342 18:21:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.342 18:21:41 -- common/autotest_common.sh@10 -- # set +x 00:12:43.342 18:21:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.342 18:21:41 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:12:43.342 18:21:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.342 18:21:41 -- common/autotest_common.sh@10 -- # set +x 00:12:43.342 Malloc0 00:12:43.342 18:21:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.342 18:21:41 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:43.342 18:21:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.342 18:21:41 -- common/autotest_common.sh@10 -- # set +x 00:12:43.342 18:21:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.342 18:21:41 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:43.342 18:21:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.342 18:21:41 -- common/autotest_common.sh@10 -- # set +x 00:12:43.342 18:21:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.342 18:21:41 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.342 18:21:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.342 18:21:41 -- common/autotest_common.sh@10 -- # set +x 00:12:43.342 18:21:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.342 18:21:41 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:12:43.342 18:21:41 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:12:43.342 Shutting down the fuzz application 00:12:43.342 18:21:41 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:12:43.602 Shutting down the fuzz application 00:12:43.602 18:21:41 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.602 18:21:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.602 18:21:41 -- common/autotest_common.sh@10 -- # set +x 00:12:43.861 18:21:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.861 18:21:41 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:43.861 18:21:41 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:12:43.861 18:21:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:43.861 18:21:41 -- nvmf/common.sh@116 -- # sync 00:12:43.861 18:21:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:43.861 18:21:41 -- nvmf/common.sh@119 -- # set +e 00:12:43.861 18:21:41 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:12:43.861 18:21:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:43.861 rmmod nvme_tcp 00:12:43.861 rmmod nvme_fabrics 00:12:43.861 rmmod nvme_keyring 00:12:43.861 18:21:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:43.861 18:21:41 -- nvmf/common.sh@123 -- # set -e 00:12:43.861 18:21:41 -- nvmf/common.sh@124 -- # return 0 00:12:43.861 18:21:41 -- nvmf/common.sh@477 -- # '[' -n 77764 ']' 00:12:43.861 18:21:41 -- nvmf/common.sh@478 -- # killprocess 77764 00:12:43.861 18:21:41 -- common/autotest_common.sh@936 -- # '[' -z 77764 ']' 00:12:43.861 18:21:41 -- common/autotest_common.sh@940 -- # kill -0 77764 00:12:43.861 18:21:41 -- common/autotest_common.sh@941 -- # uname 00:12:43.861 18:21:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:43.861 18:21:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77764 00:12:43.861 18:21:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:43.861 18:21:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:43.861 killing process with pid 77764 00:12:43.861 18:21:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77764' 00:12:43.861 18:21:42 -- common/autotest_common.sh@955 -- # kill 77764 00:12:43.861 18:21:42 -- common/autotest_common.sh@960 -- # wait 77764 00:12:44.121 18:21:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:44.121 18:21:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:44.121 18:21:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:44.121 18:21:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.121 18:21:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:44.121 18:21:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.121 18:21:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.121 18:21:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.121 18:21:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:44.121 18:21:42 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:12:44.121 00:12:44.121 real 0m2.680s 00:12:44.121 user 0m2.766s 00:12:44.121 sys 0m0.585s 00:12:44.121 18:21:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:44.121 18:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.121 ************************************ 00:12:44.121 END TEST nvmf_fuzz 00:12:44.121 ************************************ 00:12:44.121 18:21:42 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:44.121 18:21:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:44.121 18:21:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.121 18:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.121 ************************************ 00:12:44.121 START TEST nvmf_multiconnection 00:12:44.121 ************************************ 00:12:44.121 18:21:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:44.121 * Looking for test storage... 
00:12:44.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:44.121 18:21:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:44.121 18:21:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:44.121 18:21:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:44.381 18:21:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:44.381 18:21:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:44.381 18:21:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:44.381 18:21:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:44.381 18:21:42 -- scripts/common.sh@335 -- # IFS=.-: 00:12:44.381 18:21:42 -- scripts/common.sh@335 -- # read -ra ver1 00:12:44.381 18:21:42 -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.381 18:21:42 -- scripts/common.sh@336 -- # read -ra ver2 00:12:44.381 18:21:42 -- scripts/common.sh@337 -- # local 'op=<' 00:12:44.381 18:21:42 -- scripts/common.sh@339 -- # ver1_l=2 00:12:44.381 18:21:42 -- scripts/common.sh@340 -- # ver2_l=1 00:12:44.381 18:21:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:44.381 18:21:42 -- scripts/common.sh@343 -- # case "$op" in 00:12:44.381 18:21:42 -- scripts/common.sh@344 -- # : 1 00:12:44.381 18:21:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:44.381 18:21:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:44.381 18:21:42 -- scripts/common.sh@364 -- # decimal 1 00:12:44.381 18:21:42 -- scripts/common.sh@352 -- # local d=1 00:12:44.381 18:21:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.381 18:21:42 -- scripts/common.sh@354 -- # echo 1 00:12:44.381 18:21:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:44.381 18:21:42 -- scripts/common.sh@365 -- # decimal 2 00:12:44.381 18:21:42 -- scripts/common.sh@352 -- # local d=2 00:12:44.381 18:21:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.381 18:21:42 -- scripts/common.sh@354 -- # echo 2 00:12:44.381 18:21:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:44.381 18:21:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:44.381 18:21:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:44.381 18:21:42 -- scripts/common.sh@367 -- # return 0 00:12:44.381 18:21:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.381 18:21:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:44.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.381 --rc genhtml_branch_coverage=1 00:12:44.381 --rc genhtml_function_coverage=1 00:12:44.381 --rc genhtml_legend=1 00:12:44.381 --rc geninfo_all_blocks=1 00:12:44.381 --rc geninfo_unexecuted_blocks=1 00:12:44.381 00:12:44.381 ' 00:12:44.381 18:21:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:44.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.381 --rc genhtml_branch_coverage=1 00:12:44.381 --rc genhtml_function_coverage=1 00:12:44.381 --rc genhtml_legend=1 00:12:44.381 --rc geninfo_all_blocks=1 00:12:44.381 --rc geninfo_unexecuted_blocks=1 00:12:44.381 00:12:44.381 ' 00:12:44.381 18:21:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:44.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.381 --rc genhtml_branch_coverage=1 00:12:44.381 --rc genhtml_function_coverage=1 00:12:44.381 --rc genhtml_legend=1 00:12:44.381 --rc geninfo_all_blocks=1 00:12:44.381 --rc geninfo_unexecuted_blocks=1 00:12:44.381 00:12:44.381 ' 00:12:44.381 
18:21:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:44.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.381 --rc genhtml_branch_coverage=1 00:12:44.381 --rc genhtml_function_coverage=1 00:12:44.381 --rc genhtml_legend=1 00:12:44.381 --rc geninfo_all_blocks=1 00:12:44.381 --rc geninfo_unexecuted_blocks=1 00:12:44.381 00:12:44.381 ' 00:12:44.381 18:21:42 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:44.381 18:21:42 -- nvmf/common.sh@7 -- # uname -s 00:12:44.381 18:21:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.381 18:21:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.381 18:21:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.381 18:21:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.381 18:21:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.381 18:21:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.381 18:21:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.381 18:21:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.381 18:21:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.381 18:21:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.381 18:21:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:12:44.381 18:21:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:12:44.381 18:21:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.381 18:21:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.381 18:21:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:44.381 18:21:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:44.381 18:21:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.381 18:21:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.381 18:21:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.381 18:21:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.381 18:21:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.381 18:21:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.381 18:21:42 -- paths/export.sh@5 -- # export PATH 00:12:44.381 18:21:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.381 18:21:42 -- nvmf/common.sh@46 -- # : 0 00:12:44.381 18:21:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:44.381 18:21:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:44.381 18:21:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:44.381 18:21:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.381 18:21:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.381 18:21:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:44.381 18:21:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:44.381 18:21:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:44.381 18:21:42 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:44.381 18:21:42 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:44.381 18:21:42 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:12:44.381 18:21:42 -- target/multiconnection.sh@16 -- # nvmftestinit 00:12:44.381 18:21:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:44.381 18:21:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.381 18:21:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:44.381 18:21:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:44.381 18:21:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:44.381 18:21:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.381 18:21:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.381 18:21:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.381 18:21:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:44.381 18:21:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:44.381 18:21:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:44.381 18:21:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:44.381 18:21:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:44.381 18:21:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:44.381 18:21:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.381 18:21:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.381 18:21:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:44.381 18:21:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:44.381 18:21:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:44.381 18:21:42 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:44.381 18:21:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:44.382 18:21:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.382 18:21:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:44.382 18:21:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:44.382 18:21:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:44.382 18:21:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:44.382 18:21:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:44.382 18:21:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:44.382 Cannot find device "nvmf_tgt_br" 00:12:44.382 18:21:42 -- nvmf/common.sh@154 -- # true 00:12:44.382 18:21:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:44.382 Cannot find device "nvmf_tgt_br2" 00:12:44.382 18:21:42 -- nvmf/common.sh@155 -- # true 00:12:44.382 18:21:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:44.382 18:21:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:44.382 Cannot find device "nvmf_tgt_br" 00:12:44.382 18:21:42 -- nvmf/common.sh@157 -- # true 00:12:44.382 18:21:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:44.382 Cannot find device "nvmf_tgt_br2" 00:12:44.382 18:21:42 -- nvmf/common.sh@158 -- # true 00:12:44.382 18:21:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:44.382 18:21:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:44.641 18:21:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:44.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:44.641 18:21:42 -- nvmf/common.sh@161 -- # true 00:12:44.641 18:21:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:44.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:44.641 18:21:42 -- nvmf/common.sh@162 -- # true 00:12:44.641 18:21:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:44.641 18:21:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:44.641 18:21:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:44.641 18:21:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:44.641 18:21:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:44.641 18:21:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:44.641 18:21:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:44.641 18:21:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:44.641 18:21:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:44.641 18:21:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:44.641 18:21:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:44.641 18:21:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:44.641 18:21:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:44.641 18:21:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:44.641 18:21:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:12:44.641 18:21:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:44.641 18:21:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:44.641 18:21:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:44.641 18:21:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:44.641 18:21:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:44.641 18:21:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:44.641 18:21:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:44.641 18:21:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:44.641 18:21:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:44.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:12:44.641 00:12:44.641 --- 10.0.0.2 ping statistics --- 00:12:44.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.641 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:44.641 18:21:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:44.641 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:44.641 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:12:44.641 00:12:44.641 --- 10.0.0.3 ping statistics --- 00:12:44.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.641 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:44.641 18:21:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:44.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:44.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:12:44.641 00:12:44.641 --- 10.0.0.1 ping statistics --- 00:12:44.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.641 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:44.641 18:21:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.641 18:21:42 -- nvmf/common.sh@421 -- # return 0 00:12:44.641 18:21:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:44.641 18:21:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.641 18:21:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:44.641 18:21:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:44.641 18:21:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.641 18:21:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:44.641 18:21:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:44.641 18:21:42 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:12:44.641 18:21:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:44.641 18:21:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:44.641 18:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.641 18:21:42 -- nvmf/common.sh@469 -- # nvmfpid=77961 00:12:44.641 18:21:42 -- nvmf/common.sh@470 -- # waitforlisten 77961 00:12:44.641 18:21:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.641 18:21:42 -- common/autotest_common.sh@829 -- # '[' -z 77961 ']' 00:12:44.641 18:21:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.641 18:21:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.641 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:12:44.641 18:21:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.641 18:21:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.641 18:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.901 [2024-11-17 18:21:42.917478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:44.901 [2024-11-17 18:21:42.917586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.901 [2024-11-17 18:21:43.056419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.901 [2024-11-17 18:21:43.091187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:44.901 [2024-11-17 18:21:43.091369] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.901 [2024-11-17 18:21:43.091385] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.901 [2024-11-17 18:21:43.091393] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.901 [2024-11-17 18:21:43.091507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.901 [2024-11-17 18:21:43.091652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.901 [2024-11-17 18:21:43.091710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.901 [2024-11-17 18:21:43.091714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.839 18:21:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:45.839 18:21:43 -- common/autotest_common.sh@862 -- # return 0 00:12:45.839 18:21:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:45.839 18:21:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:45.839 18:21:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.839 18:21:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.839 18:21:44 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:45.839 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.839 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.839 [2024-11-17 18:21:44.030171] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.839 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.839 18:21:44 -- target/multiconnection.sh@21 -- # seq 1 11 00:12:45.839 18:21:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:45.839 18:21:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:45.839 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.839 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.839 Malloc1 00:12:45.839 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.839 18:21:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:12:45.839 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.839 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.839 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.839 
18:21:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.839 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.839 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.839 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.839 18:21:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.839 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.839 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.839 [2024-11-17 18:21:44.101442] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.099 18:21:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 Malloc2 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.099 18:21:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 Malloc3 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
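The rpc_cmd calls being traced here, and continuing below for Malloc4 through Malloc11, follow one pattern per subsystem: create a Malloc bdev, create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A condensed sketch, assuming the equivalent direct rpc.py invocation against the target's default socket (the test's rpc_cmd wrapper is what actually issues these):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done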
00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.099 18:21:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 Malloc4 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.099 18:21:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 Malloc5 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.099 18:21:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 Malloc6 00:12:46.099 18:21:44 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.099 18:21:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 Malloc7 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.099 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.099 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.099 18:21:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:12:46.099 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.100 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.100 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.100 18:21:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.100 18:21:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:12:46.100 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.100 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 Malloc8 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 
-- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.359 18:21:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 Malloc9 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.359 18:21:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 Malloc10 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.359 18:21:44 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 Malloc11 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:12:46.359 18:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.359 18:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:46.359 18:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.359 18:21:44 -- target/multiconnection.sh@28 -- # seq 1 11 00:12:46.359 18:21:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.359 18:21:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.618 18:21:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:12:46.618 18:21:44 -- common/autotest_common.sh@1187 -- # local i=0 00:12:46.618 18:21:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.618 18:21:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:46.618 18:21:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:48.535 18:21:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:48.535 18:21:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:48.535 18:21:46 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:12:48.535 18:21:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:48.535 18:21:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.535 18:21:46 -- common/autotest_common.sh@1197 -- # return 0 00:12:48.535 18:21:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:48.535 18:21:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:12:48.807 18:21:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:12:48.807 18:21:46 -- common/autotest_common.sh@1187 -- # local i=0 00:12:48.807 18:21:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.807 18:21:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:48.807 18:21:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:50.711 18:21:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:50.711 18:21:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:12:50.711 18:21:48 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:12:50.711 18:21:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:50.711 18:21:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.711 18:21:48 -- common/autotest_common.sh@1197 -- # return 0 00:12:50.711 18:21:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:50.711 18:21:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:12:50.969 18:21:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:12:50.969 18:21:48 -- common/autotest_common.sh@1187 -- # local i=0 00:12:50.969 18:21:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.969 18:21:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:50.969 18:21:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:52.870 18:21:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:52.870 18:21:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:52.870 18:21:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:12:52.870 18:21:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:52.870 18:21:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.870 18:21:51 -- common/autotest_common.sh@1197 -- # return 0 00:12:52.870 18:21:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.870 18:21:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:12:52.870 18:21:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:12:52.871 18:21:51 -- common/autotest_common.sh@1187 -- # local i=0 00:12:53.129 18:21:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.129 18:21:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:53.129 18:21:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:55.033 18:21:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:55.033 18:21:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:55.033 18:21:53 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:12:55.033 18:21:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:55.033 18:21:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.033 18:21:53 -- common/autotest_common.sh@1197 -- # return 0 00:12:55.033 18:21:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:55.034 18:21:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:12:55.034 18:21:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:12:55.034 18:21:53 -- common/autotest_common.sh@1187 -- # local i=0 00:12:55.034 18:21:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.034 18:21:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:55.034 18:21:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:57.567 18:21:55 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:57.567 18:21:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:57.567 18:21:55 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:12:57.567 18:21:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:57.567 18:21:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.567 18:21:55 -- common/autotest_common.sh@1197 -- # return 0 00:12:57.567 18:21:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:57.567 18:21:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:12:57.567 18:21:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:12:57.567 18:21:55 -- common/autotest_common.sh@1187 -- # local i=0 00:12:57.567 18:21:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.567 18:21:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:57.567 18:21:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:59.470 18:21:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:59.470 18:21:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:59.470 18:21:57 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:12:59.470 18:21:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:59.470 18:21:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.470 18:21:57 -- common/autotest_common.sh@1197 -- # return 0 00:12:59.470 18:21:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:59.470 18:21:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:59.470 18:21:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:59.470 18:21:57 -- common/autotest_common.sh@1187 -- # local i=0 00:12:59.470 18:21:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.470 18:21:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:59.470 18:21:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:01.373 18:21:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:01.373 18:21:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:01.373 18:21:59 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:13:01.633 18:21:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:01.633 18:21:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.633 18:21:59 -- common/autotest_common.sh@1197 -- # return 0 00:13:01.633 18:21:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:01.633 18:21:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:13:01.633 18:21:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:13:01.633 18:21:59 -- common/autotest_common.sh@1187 -- # local i=0 00:13:01.633 18:21:59 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.633 18:21:59 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:01.633 18:21:59 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:03.540 18:22:01 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:03.798 18:22:01 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:03.799 18:22:01 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:13:03.799 18:22:01 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:03.799 18:22:01 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.799 18:22:01 -- common/autotest_common.sh@1197 -- # return 0 00:13:03.799 18:22:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.799 18:22:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:13:03.799 18:22:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:13:03.799 18:22:01 -- common/autotest_common.sh@1187 -- # local i=0 00:13:03.799 18:22:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.799 18:22:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:03.799 18:22:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:06.364 18:22:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:06.364 18:22:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:06.364 18:22:03 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:13:06.364 18:22:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:06.364 18:22:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.364 18:22:03 -- common/autotest_common.sh@1197 -- # return 0 00:13:06.364 18:22:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:06.364 18:22:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:13:06.364 18:22:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:13:06.364 18:22:04 -- common/autotest_common.sh@1187 -- # local i=0 00:13:06.364 18:22:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.364 18:22:04 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:06.364 18:22:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:08.263 18:22:06 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:08.263 18:22:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:08.263 18:22:06 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:13:08.263 18:22:06 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:08.263 18:22:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.263 18:22:06 -- common/autotest_common.sh@1197 -- # return 0 00:13:08.263 18:22:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:08.263 18:22:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:13:08.263 18:22:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:13:08.263 18:22:06 -- common/autotest_common.sh@1187 -- # local i=0 
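Each connect above follows the same host-side pattern: nvme connect against one subsystem NQN, then poll lsblk until a namespace with the expected serial appears (the waitforserial helper, up to 15 retries of 2 seconds). A simplified sketch using the host NQN/ID from this run; the real helper counts matching devices with grep -c rather than grep -q:

  HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870
  for i in $(seq 1 11); do
      nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
          -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      # Wait until the new namespace shows up with serial SPDKN
      for _ in $(seq 1 15); do
          lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i" && break
          sleep 2
      done
  done
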
00:13:08.263 18:22:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.263 18:22:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:08.263 18:22:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:10.203 18:22:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:10.203 18:22:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:10.203 18:22:08 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:13:10.203 18:22:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:10.203 18:22:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.203 18:22:08 -- common/autotest_common.sh@1197 -- # return 0 00:13:10.203 18:22:08 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:13:10.203 [global] 00:13:10.203 thread=1 00:13:10.203 invalidate=1 00:13:10.203 rw=read 00:13:10.203 time_based=1 00:13:10.203 runtime=10 00:13:10.203 ioengine=libaio 00:13:10.203 direct=1 00:13:10.203 bs=262144 00:13:10.203 iodepth=64 00:13:10.203 norandommap=1 00:13:10.203 numjobs=1 00:13:10.203 00:13:10.203 [job0] 00:13:10.203 filename=/dev/nvme0n1 00:13:10.203 [job1] 00:13:10.203 filename=/dev/nvme10n1 00:13:10.203 [job2] 00:13:10.203 filename=/dev/nvme1n1 00:13:10.203 [job3] 00:13:10.203 filename=/dev/nvme2n1 00:13:10.203 [job4] 00:13:10.203 filename=/dev/nvme3n1 00:13:10.203 [job5] 00:13:10.203 filename=/dev/nvme4n1 00:13:10.203 [job6] 00:13:10.203 filename=/dev/nvme5n1 00:13:10.203 [job7] 00:13:10.203 filename=/dev/nvme6n1 00:13:10.203 [job8] 00:13:10.203 filename=/dev/nvme7n1 00:13:10.203 [job9] 00:13:10.203 filename=/dev/nvme8n1 00:13:10.203 [job10] 00:13:10.203 filename=/dev/nvme9n1 00:13:10.462 Could not set queue depth (nvme0n1) 00:13:10.462 Could not set queue depth (nvme10n1) 00:13:10.462 Could not set queue depth (nvme1n1) 00:13:10.462 Could not set queue depth (nvme2n1) 00:13:10.462 Could not set queue depth (nvme3n1) 00:13:10.462 Could not set queue depth (nvme4n1) 00:13:10.462 Could not set queue depth (nvme5n1) 00:13:10.462 Could not set queue depth (nvme6n1) 00:13:10.462 Could not set queue depth (nvme7n1) 00:13:10.462 Could not set queue depth (nvme8n1) 00:13:10.462 Could not set queue depth (nvme9n1) 00:13:10.462 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:10.462 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:10.462 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:10.462 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:10.462 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:10.462 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:10.462 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:10.462 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:10.462 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:10.462 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:13:10.462 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:10.462 fio-3.35 00:13:10.462 Starting 11 threads 00:13:22.671 00:13:22.671 job0: (groupid=0, jobs=1): err= 0: pid=78421: Sun Nov 17 18:22:19 2024 00:13:22.671 read: IOPS=448, BW=112MiB/s (117MB/s)(1136MiB/10143msec) 00:13:22.671 slat (usec): min=20, max=98062, avg=2193.63, stdev=7138.20 00:13:22.671 clat (msec): min=31, max=299, avg=140.43, stdev=52.00 00:13:22.671 lat (msec): min=31, max=299, avg=142.63, stdev=53.06 00:13:22.671 clat percentiles (msec): 00:13:22.671 | 1.00th=[ 59], 5.00th=[ 73], 10.00th=[ 82], 20.00th=[ 87], 00:13:22.671 | 30.00th=[ 91], 40.00th=[ 96], 50.00th=[ 167], 60.00th=[ 174], 00:13:22.671 | 70.00th=[ 178], 80.00th=[ 194], 90.00th=[ 201], 95.00th=[ 205], 00:13:22.671 | 99.00th=[ 243], 99.50th=[ 255], 99.90th=[ 275], 99.95th=[ 300], 00:13:22.671 | 99.99th=[ 300] 00:13:22.671 bw ( KiB/s): min=72192, max=192000, per=5.59%, avg=114722.85, stdev=43777.56, samples=20 00:13:22.671 iops : min= 282, max= 750, avg=448.10, stdev=170.98, samples=20 00:13:22.671 lat (msec) : 50=0.18%, 100=43.43%, 250=55.62%, 500=0.77% 00:13:22.671 cpu : usr=0.19%, sys=1.81%, ctx=1110, majf=0, minf=4097 00:13:22.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:22.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:22.671 issued rwts: total=4545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.671 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.671 job1: (groupid=0, jobs=1): err= 0: pid=78426: Sun Nov 17 18:22:19 2024 00:13:22.671 read: IOPS=1003, BW=251MiB/s (263MB/s)(2517MiB/10029msec) 00:13:22.671 slat (usec): min=20, max=49777, avg=988.49, stdev=2179.52 00:13:22.671 clat (msec): min=23, max=109, avg=62.65, stdev= 7.67 00:13:22.671 lat (msec): min=23, max=109, avg=63.64, stdev= 7.70 00:13:22.671 clat percentiles (msec): 00:13:22.671 | 1.00th=[ 48], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 58], 00:13:22.671 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 64], 00:13:22.671 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 70], 95.00th=[ 74], 00:13:22.671 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 103], 99.95th=[ 104], 00:13:22.671 | 99.99th=[ 110] 00:13:22.671 bw ( KiB/s): min=178020, max=271360, per=12.48%, avg=256172.65, stdev=19269.98, samples=20 00:13:22.671 iops : min= 695, max= 1060, avg=1000.65, stdev=75.35, samples=20 00:13:22.671 lat (msec) : 50=2.20%, 100=97.55%, 250=0.26% 00:13:22.671 cpu : usr=0.46%, sys=3.79%, ctx=2197, majf=0, minf=4097 00:13:22.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:22.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:22.671 issued rwts: total=10068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.671 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.671 job2: (groupid=0, jobs=1): err= 0: pid=78428: Sun Nov 17 18:22:19 2024 00:13:22.671 read: IOPS=369, BW=92.4MiB/s (96.9MB/s)(937MiB/10138msec) 00:13:22.671 slat (usec): min=19, max=58225, avg=2663.91, stdev=7038.44 00:13:22.671 clat (msec): min=37, max=335, avg=170.17, stdev=29.16 00:13:22.671 lat (msec): min=37, max=335, avg=172.84, stdev=30.08 00:13:22.671 clat percentiles (msec): 00:13:22.671 | 1.00th=[ 53], 5.00th=[ 138], 10.00th=[ 142], 
20.00th=[ 146], 00:13:22.671 | 30.00th=[ 153], 40.00th=[ 167], 50.00th=[ 171], 60.00th=[ 176], 00:13:22.671 | 70.00th=[ 184], 80.00th=[ 197], 90.00th=[ 203], 95.00th=[ 207], 00:13:22.671 | 99.00th=[ 243], 99.50th=[ 279], 99.90th=[ 296], 99.95th=[ 296], 00:13:22.671 | 99.99th=[ 334] 00:13:22.671 bw ( KiB/s): min=73728, max=120832, per=4.59%, avg=94310.10, stdev=13043.01, samples=20 00:13:22.671 iops : min= 288, max= 472, avg=368.35, stdev=50.99, samples=20 00:13:22.671 lat (msec) : 50=0.45%, 100=1.20%, 250=97.57%, 500=0.77% 00:13:22.671 cpu : usr=0.13%, sys=1.30%, ctx=953, majf=0, minf=4097 00:13:22.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:13:22.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:22.671 issued rwts: total=3748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.671 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.671 job3: (groupid=0, jobs=1): err= 0: pid=78429: Sun Nov 17 18:22:19 2024 00:13:22.671 read: IOPS=1201, BW=300MiB/s (315MB/s)(3007MiB/10010msec) 00:13:22.671 slat (usec): min=20, max=47966, avg=826.83, stdev=1957.83 00:13:22.671 clat (msec): min=7, max=125, avg=52.36, stdev=21.19 00:13:22.671 lat (msec): min=10, max=125, avg=53.19, stdev=21.50 00:13:22.671 clat percentiles (msec): 00:13:22.671 | 1.00th=[ 30], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33], 00:13:22.671 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 53], 60.00th=[ 61], 00:13:22.671 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 87], 95.00th=[ 92], 00:13:22.671 | 99.00th=[ 100], 99.50th=[ 103], 99.90th=[ 112], 99.95th=[ 116], 00:13:22.671 | 99.99th=[ 126] 00:13:22.671 bw ( KiB/s): min=173568, max=492544, per=14.47%, avg=297202.26, stdev=121152.67, samples=19 00:13:22.671 iops : min= 678, max= 1924, avg=1160.89, stdev=473.27, samples=19 00:13:22.671 lat (msec) : 10=0.01%, 20=0.17%, 50=48.73%, 100=50.27%, 250=0.82% 00:13:22.671 cpu : usr=0.59%, sys=4.42%, ctx=2606, majf=0, minf=4097 00:13:22.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:22.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:22.671 issued rwts: total=12028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.671 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.671 job4: (groupid=0, jobs=1): err= 0: pid=78430: Sun Nov 17 18:22:19 2024 00:13:22.671 read: IOPS=366, BW=91.6MiB/s (96.1MB/s)(929MiB/10135msec) 00:13:22.671 slat (usec): min=20, max=82557, avg=2698.21, stdev=7806.45 00:13:22.671 clat (msec): min=38, max=323, avg=171.65, stdev=29.64 00:13:22.671 lat (msec): min=38, max=323, avg=174.35, stdev=30.59 00:13:22.671 clat percentiles (msec): 00:13:22.671 | 1.00th=[ 106], 5.00th=[ 138], 10.00th=[ 142], 20.00th=[ 146], 00:13:22.671 | 30.00th=[ 153], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:13:22.671 | 70.00th=[ 182], 80.00th=[ 197], 90.00th=[ 203], 95.00th=[ 209], 00:13:22.671 | 99.00th=[ 284], 99.50th=[ 305], 99.90th=[ 317], 99.95th=[ 326], 00:13:22.671 | 99.99th=[ 326] 00:13:22.671 bw ( KiB/s): min=68608, max=114688, per=4.55%, avg=93450.85, stdev=13507.36, samples=20 00:13:22.671 iops : min= 268, max= 448, avg=364.95, stdev=52.75, samples=20 00:13:22.671 lat (msec) : 50=0.57%, 100=0.32%, 250=97.39%, 500=1.72% 00:13:22.671 cpu : usr=0.20%, sys=1.48%, ctx=891, majf=0, minf=4097 00:13:22.671 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:13:22.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:22.671 issued rwts: total=3715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.671 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.671 job5: (groupid=0, jobs=1): err= 0: pid=78431: Sun Nov 17 18:22:19 2024 00:13:22.671 read: IOPS=710, BW=178MiB/s (186MB/s)(1801MiB/10137msec) 00:13:22.671 slat (usec): min=19, max=146224, avg=1378.48, stdev=6301.75 00:13:22.671 clat (msec): min=15, max=343, avg=88.56, stdev=47.66 00:13:22.671 lat (msec): min=15, max=343, avg=89.93, stdev=48.67 00:13:22.671 clat percentiles (msec): 00:13:22.671 | 1.00th=[ 46], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 61], 00:13:22.671 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 77], 00:13:22.671 | 70.00th=[ 88], 80.00th=[ 94], 90.00th=[ 199], 95.00th=[ 203], 00:13:22.671 | 99.00th=[ 209], 99.50th=[ 218], 99.90th=[ 313], 99.95th=[ 321], 00:13:22.671 | 99.99th=[ 342] 00:13:22.671 bw ( KiB/s): min=69632, max=264704, per=8.90%, avg=182760.35, stdev=75554.73, samples=20 00:13:22.671 iops : min= 272, max= 1034, avg=713.85, stdev=295.22, samples=20 00:13:22.671 lat (msec) : 20=0.08%, 50=1.82%, 100=83.01%, 250=14.80%, 500=0.29% 00:13:22.671 cpu : usr=0.35%, sys=2.85%, ctx=1651, majf=0, minf=4097 00:13:22.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:22.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:22.671 issued rwts: total=7203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.671 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.671 job6: (groupid=0, jobs=1): err= 0: pid=78432: Sun Nov 17 18:22:19 2024 00:13:22.671 read: IOPS=369, BW=92.3MiB/s (96.7MB/s)(935MiB/10134msec) 00:13:22.671 slat (usec): min=16, max=57198, avg=2657.65, stdev=7384.95 00:13:22.671 clat (msec): min=14, max=364, avg=170.53, stdev=31.11 00:13:22.671 lat (msec): min=15, max=364, avg=173.19, stdev=32.09 00:13:22.671 clat percentiles (msec): 00:13:22.671 | 1.00th=[ 61], 5.00th=[ 138], 10.00th=[ 142], 20.00th=[ 146], 00:13:22.671 | 30.00th=[ 153], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:13:22.671 | 70.00th=[ 182], 80.00th=[ 197], 90.00th=[ 203], 95.00th=[ 209], 00:13:22.671 | 99.00th=[ 241], 99.50th=[ 292], 99.90th=[ 326], 99.95th=[ 363], 00:13:22.671 | 99.99th=[ 363] 00:13:22.671 bw ( KiB/s): min=77824, max=119808, per=4.58%, avg=94114.65, stdev=13471.95, samples=20 00:13:22.671 iops : min= 304, max= 468, avg=367.60, stdev=52.63, samples=20 00:13:22.671 lat (msec) : 20=0.19%, 50=0.80%, 100=1.68%, 250=96.50%, 500=0.83% 00:13:22.671 cpu : usr=0.19%, sys=1.36%, ctx=914, majf=0, minf=4098 00:13:22.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:13:22.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:22.672 issued rwts: total=3740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.672 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.672 job7: (groupid=0, jobs=1): err= 0: pid=78433: Sun Nov 17 18:22:19 2024 00:13:22.672 read: IOPS=1874, BW=469MiB/s (491MB/s)(4692MiB/10011msec) 00:13:22.672 slat (usec): min=20, max=7407, avg=528.69, stdev=1041.92 00:13:22.672 clat (usec): 
min=9563, max=50053, avg=33562.51, stdev=2188.18 00:13:22.672 lat (usec): min=13793, max=51155, avg=34091.20, stdev=2202.85 00:13:22.672 clat percentiles (usec): 00:13:22.672 | 1.00th=[28705], 5.00th=[30540], 10.00th=[31327], 20.00th=[32113], 00:13:22.672 | 30.00th=[32637], 40.00th=[33162], 50.00th=[33424], 60.00th=[33817], 00:13:22.672 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35914], 95.00th=[36963], 00:13:22.672 | 99.00th=[39060], 99.50th=[40633], 99.90th=[46400], 99.95th=[46924], 00:13:22.672 | 99.99th=[50070] 00:13:22.672 bw ( KiB/s): min=459264, max=488960, per=23.32%, avg=478799.30, stdev=7321.92, samples=20 00:13:22.672 iops : min= 1794, max= 1910, avg=1870.30, stdev=28.59, samples=20 00:13:22.672 lat (msec) : 10=0.01%, 20=0.20%, 50=99.79%, 100=0.01% 00:13:22.672 cpu : usr=0.79%, sys=5.93%, ctx=4038, majf=0, minf=4097 00:13:22.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:22.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:22.672 issued rwts: total=18768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.672 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.672 job8: (groupid=0, jobs=1): err= 0: pid=78434: Sun Nov 17 18:22:19 2024 00:13:22.672 read: IOPS=367, BW=92.0MiB/s (96.4MB/s)(933MiB/10143msec) 00:13:22.672 slat (usec): min=20, max=67250, avg=2688.72, stdev=7355.54 00:13:22.672 clat (msec): min=24, max=341, avg=171.00, stdev=29.08 00:13:22.672 lat (msec): min=24, max=341, avg=173.69, stdev=30.03 00:13:22.672 clat percentiles (msec): 00:13:22.672 | 1.00th=[ 68], 5.00th=[ 138], 10.00th=[ 142], 20.00th=[ 146], 00:13:22.672 | 30.00th=[ 153], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:13:22.672 | 70.00th=[ 182], 80.00th=[ 197], 90.00th=[ 203], 95.00th=[ 209], 00:13:22.672 | 99.00th=[ 245], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 342], 00:13:22.672 | 99.99th=[ 342] 00:13:22.672 bw ( KiB/s): min=73216, max=114688, per=4.57%, avg=93866.15, stdev=12724.41, samples=20 00:13:22.672 iops : min= 286, max= 448, avg=366.65, stdev=49.71, samples=20 00:13:22.672 lat (msec) : 50=0.40%, 100=0.88%, 250=97.88%, 500=0.83% 00:13:22.672 cpu : usr=0.17%, sys=1.47%, ctx=909, majf=0, minf=4097 00:13:22.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:13:22.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:22.672 issued rwts: total=3731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.672 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.672 job9: (groupid=0, jobs=1): err= 0: pid=78435: Sun Nov 17 18:22:19 2024 00:13:22.672 read: IOPS=1006, BW=252MiB/s (264MB/s)(2523MiB/10029msec) 00:13:22.672 slat (usec): min=16, max=16647, avg=971.20, stdev=2096.61 00:13:22.672 clat (msec): min=21, max=109, avg=62.55, stdev= 7.08 00:13:22.672 lat (msec): min=21, max=110, avg=63.52, stdev= 7.08 00:13:22.672 clat percentiles (msec): 00:13:22.672 | 1.00th=[ 48], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 58], 00:13:22.672 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 64], 00:13:22.672 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 70], 95.00th=[ 73], 00:13:22.672 | 99.00th=[ 90], 99.50th=[ 94], 99.90th=[ 104], 99.95th=[ 107], 00:13:22.672 | 99.99th=[ 109] 00:13:22.672 bw ( KiB/s): min=187392, max=273920, per=12.50%, avg=256715.85, stdev=17264.58, samples=20 00:13:22.672 iops : 
min= 732, max= 1070, avg=1002.75, stdev=67.42, samples=20 00:13:22.672 lat (msec) : 50=1.88%, 100=97.95%, 250=0.17% 00:13:22.672 cpu : usr=0.37%, sys=3.65%, ctx=2253, majf=0, minf=4097 00:13:22.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:22.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:22.672 issued rwts: total=10092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.672 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.672 job10: (groupid=0, jobs=1): err= 0: pid=78436: Sun Nov 17 18:22:19 2024 00:13:22.672 read: IOPS=366, BW=91.6MiB/s (96.1MB/s)(929MiB/10137msec) 00:13:22.672 slat (usec): min=16, max=70478, avg=2693.49, stdev=7654.48 00:13:22.672 clat (msec): min=26, max=333, avg=171.63, stdev=29.24 00:13:22.672 lat (msec): min=27, max=333, avg=174.32, stdev=30.25 00:13:22.672 clat percentiles (msec): 00:13:22.672 | 1.00th=[ 95], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 146], 00:13:22.672 | 30.00th=[ 153], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:13:22.672 | 70.00th=[ 182], 80.00th=[ 199], 90.00th=[ 203], 95.00th=[ 207], 00:13:22.672 | 99.00th=[ 257], 99.50th=[ 296], 99.90th=[ 326], 99.95th=[ 334], 00:13:22.672 | 99.99th=[ 334] 00:13:22.672 bw ( KiB/s): min=76288, max=114176, per=4.55%, avg=93456.55, stdev=12298.66, samples=20 00:13:22.672 iops : min= 298, max= 446, avg=365.05, stdev=48.05, samples=20 00:13:22.672 lat (msec) : 50=0.30%, 100=1.16%, 250=97.23%, 500=1.32% 00:13:22.672 cpu : usr=0.16%, sys=1.37%, ctx=900, majf=0, minf=4097 00:13:22.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:13:22.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:22.672 issued rwts: total=3715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.672 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.672 00:13:22.672 Run status group 0 (all jobs): 00:13:22.672 READ: bw=2005MiB/s (2103MB/s), 91.6MiB/s-469MiB/s (96.1MB/s-491MB/s), io=19.9GiB (21.3GB), run=10010-10143msec 00:13:22.672 00:13:22.672 Disk stats (read/write): 00:13:22.672 nvme0n1: ios=8963/0, merge=0/0, ticks=1220911/0, in_queue=1220911, util=97.84% 00:13:22.672 nvme10n1: ios=20017/0, merge=0/0, ticks=1235292/0, in_queue=1235292, util=97.81% 00:13:22.672 nvme1n1: ios=7372/0, merge=0/0, ticks=1220503/0, in_queue=1220503, util=98.10% 00:13:22.672 nvme2n1: ios=23042/0, merge=0/0, ticks=1208790/0, in_queue=1208790, util=98.27% 00:13:22.672 nvme3n1: ios=7303/0, merge=0/0, ticks=1215678/0, in_queue=1215678, util=98.19% 00:13:22.672 nvme4n1: ios=14279/0, merge=0/0, ticks=1229731/0, in_queue=1229731, util=98.45% 00:13:22.672 nvme5n1: ios=7359/0, merge=0/0, ticks=1217904/0, in_queue=1217904, util=98.65% 00:13:22.672 nvme6n1: ios=36538/0, merge=0/0, ticks=1211898/0, in_queue=1211898, util=98.69% 00:13:22.672 nvme7n1: ios=7342/0, merge=0/0, ticks=1220806/0, in_queue=1220806, util=98.93% 00:13:22.672 nvme8n1: ios=20070/0, merge=0/0, ticks=1236265/0, in_queue=1236265, util=99.08% 00:13:22.672 nvme9n1: ios=7306/0, merge=0/0, ticks=1218531/0, in_queue=1218531, util=99.16% 00:13:22.672 18:22:19 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:13:22.672 [global] 00:13:22.672 thread=1 00:13:22.672 invalidate=1 00:13:22.672 rw=randwrite 
00:13:22.672 time_based=1 00:13:22.672 runtime=10 00:13:22.672 ioengine=libaio 00:13:22.672 direct=1 00:13:22.672 bs=262144 00:13:22.672 iodepth=64 00:13:22.672 norandommap=1 00:13:22.672 numjobs=1 00:13:22.672 00:13:22.672 [job0] 00:13:22.672 filename=/dev/nvme0n1 00:13:22.672 [job1] 00:13:22.672 filename=/dev/nvme10n1 00:13:22.672 [job2] 00:13:22.672 filename=/dev/nvme1n1 00:13:22.672 [job3] 00:13:22.672 filename=/dev/nvme2n1 00:13:22.672 [job4] 00:13:22.672 filename=/dev/nvme3n1 00:13:22.672 [job5] 00:13:22.672 filename=/dev/nvme4n1 00:13:22.672 [job6] 00:13:22.672 filename=/dev/nvme5n1 00:13:22.672 [job7] 00:13:22.672 filename=/dev/nvme6n1 00:13:22.672 [job8] 00:13:22.672 filename=/dev/nvme7n1 00:13:22.672 [job9] 00:13:22.672 filename=/dev/nvme8n1 00:13:22.672 [job10] 00:13:22.672 filename=/dev/nvme9n1 00:13:22.672 Could not set queue depth (nvme0n1) 00:13:22.672 Could not set queue depth (nvme10n1) 00:13:22.672 Could not set queue depth (nvme1n1) 00:13:22.672 Could not set queue depth (nvme2n1) 00:13:22.672 Could not set queue depth (nvme3n1) 00:13:22.672 Could not set queue depth (nvme4n1) 00:13:22.672 Could not set queue depth (nvme5n1) 00:13:22.672 Could not set queue depth (nvme6n1) 00:13:22.672 Could not set queue depth (nvme7n1) 00:13:22.672 Could not set queue depth (nvme8n1) 00:13:22.672 Could not set queue depth (nvme9n1) 00:13:22.672 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:22.672 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:22.672 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:22.672 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:22.672 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:22.672 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:22.672 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:22.672 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:22.672 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:22.672 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:22.672 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:22.672 fio-3.35 00:13:22.672 Starting 11 threads 00:13:32.650 00:13:32.650 job0: (groupid=0, jobs=1): err= 0: pid=78637: Sun Nov 17 18:22:29 2024 00:13:32.650 write: IOPS=682, BW=171MiB/s (179MB/s)(1719MiB/10079msec); 0 zone resets 00:13:32.651 slat (usec): min=17, max=51982, avg=1449.47, stdev=2525.28 00:13:32.651 clat (msec): min=54, max=169, avg=92.35, stdev= 9.52 00:13:32.651 lat (msec): min=54, max=169, avg=93.80, stdev= 9.32 00:13:32.651 clat percentiles (msec): 00:13:32.651 | 1.00th=[ 84], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:13:32.651 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 92], 00:13:32.651 | 70.00th=[ 93], 80.00th=[ 93], 90.00th=[ 94], 95.00th=[ 110], 00:13:32.651 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 163], 99.95th=[ 167], 
00:13:32.651 | 99.99th=[ 169] 00:13:32.651 bw ( KiB/s): min=112865, max=182272, per=11.90%, avg=174398.45, stdev=15226.02, samples=20 00:13:32.651 iops : min= 440, max= 712, avg=681.20, stdev=59.66, samples=20 00:13:32.651 lat (msec) : 100=94.62%, 250=5.38% 00:13:32.651 cpu : usr=1.18%, sys=1.91%, ctx=9231, majf=0, minf=1 00:13:32.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:32.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:32.651 issued rwts: total=0,6875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.651 job1: (groupid=0, jobs=1): err= 0: pid=78638: Sun Nov 17 18:22:29 2024 00:13:32.651 write: IOPS=380, BW=95.0MiB/s (99.7MB/s)(965MiB/10154msec); 0 zone resets 00:13:32.651 slat (usec): min=19, max=28633, avg=2548.15, stdev=4500.94 00:13:32.651 clat (msec): min=20, max=314, avg=165.74, stdev=22.43 00:13:32.651 lat (msec): min=20, max=314, avg=168.29, stdev=22.35 00:13:32.651 clat percentiles (msec): 00:13:32.651 | 1.00th=[ 90], 5.00th=[ 150], 10.00th=[ 153], 20.00th=[ 157], 00:13:32.651 | 30.00th=[ 161], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 163], 00:13:32.651 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 197], 95.00th=[ 203], 00:13:32.651 | 99.00th=[ 211], 99.50th=[ 264], 99.90th=[ 305], 99.95th=[ 317], 00:13:32.651 | 99.99th=[ 317] 00:13:32.651 bw ( KiB/s): min=81920, max=102400, per=6.64%, avg=97213.05, stdev=7451.45, samples=20 00:13:32.651 iops : min= 320, max= 400, avg=379.70, stdev=29.10, samples=20 00:13:32.651 lat (msec) : 50=0.52%, 100=0.80%, 250=98.01%, 500=0.67% 00:13:32.651 cpu : usr=0.67%, sys=0.96%, ctx=4621, majf=0, minf=1 00:13:32.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:32.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:32.651 issued rwts: total=0,3860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.651 job2: (groupid=0, jobs=1): err= 0: pid=78650: Sun Nov 17 18:22:29 2024 00:13:32.651 write: IOPS=610, BW=153MiB/s (160MB/s)(1538MiB/10079msec); 0 zone resets 00:13:32.651 slat (usec): min=17, max=23329, avg=1572.75, stdev=2951.50 00:13:32.651 clat (msec): min=2, max=208, avg=103.24, stdev=34.93 00:13:32.651 lat (msec): min=4, max=208, avg=104.81, stdev=35.37 00:13:32.651 clat percentiles (msec): 00:13:32.651 | 1.00th=[ 32], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:13:32.651 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 93], 00:13:32.651 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 182], 95.00th=[ 194], 00:13:32.651 | 99.00th=[ 203], 99.50th=[ 205], 99.90th=[ 209], 99.95th=[ 209], 00:13:32.651 | 99.99th=[ 209] 00:13:32.651 bw ( KiB/s): min=83968, max=182784, per=10.64%, avg=155878.40, stdev=36786.25, samples=20 00:13:32.651 iops : min= 328, max= 714, avg=608.90, stdev=143.70, samples=20 00:13:32.651 lat (msec) : 4=0.02%, 10=0.13%, 20=0.41%, 50=1.37%, 100=80.04% 00:13:32.651 lat (msec) : 250=18.04% 00:13:32.651 cpu : usr=0.97%, sys=1.57%, ctx=7231, majf=0, minf=1 00:13:32.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:32.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:13:32.651 issued rwts: total=0,6152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.651 job3: (groupid=0, jobs=1): err= 0: pid=78651: Sun Nov 17 18:22:29 2024 00:13:32.651 write: IOPS=381, BW=95.4MiB/s (100.0MB/s)(968MiB/10152msec); 0 zone resets 00:13:32.651 slat (usec): min=16, max=48734, avg=2579.08, stdev=4564.10 00:13:32.651 clat (msec): min=51, max=312, avg=165.15, stdev=23.22 00:13:32.651 lat (msec): min=51, max=312, avg=167.73, stdev=23.13 00:13:32.651 clat percentiles (msec): 00:13:32.651 | 1.00th=[ 96], 5.00th=[ 132], 10.00th=[ 150], 20.00th=[ 155], 00:13:32.651 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 163], 00:13:32.651 | 70.00th=[ 165], 80.00th=[ 180], 90.00th=[ 201], 95.00th=[ 207], 00:13:32.651 | 99.00th=[ 215], 99.50th=[ 259], 99.90th=[ 305], 99.95th=[ 313], 00:13:32.651 | 99.99th=[ 313] 00:13:32.651 bw ( KiB/s): min=79872, max=118784, per=6.66%, avg=97510.40, stdev=9479.43, samples=20 00:13:32.651 iops : min= 312, max= 464, avg=380.90, stdev=37.03, samples=20 00:13:32.651 lat (msec) : 100=1.03%, 250=98.40%, 500=0.57% 00:13:32.651 cpu : usr=0.67%, sys=0.97%, ctx=3913, majf=0, minf=1 00:13:32.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:32.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:32.651 issued rwts: total=0,3872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.651 job4: (groupid=0, jobs=1): err= 0: pid=78652: Sun Nov 17 18:22:29 2024 00:13:32.651 write: IOPS=477, BW=119MiB/s (125MB/s)(1206MiB/10110msec); 0 zone resets 00:13:32.651 slat (usec): min=17, max=70704, avg=2068.34, stdev=3648.98 00:13:32.651 clat (msec): min=76, max=239, avg=132.03, stdev=11.97 00:13:32.651 lat (msec): min=76, max=239, avg=134.10, stdev=11.58 00:13:32.651 clat percentiles (msec): 00:13:32.651 | 1.00th=[ 121], 5.00th=[ 123], 10.00th=[ 124], 20.00th=[ 126], 00:13:32.651 | 30.00th=[ 129], 40.00th=[ 130], 50.00th=[ 132], 60.00th=[ 132], 00:13:32.651 | 70.00th=[ 133], 80.00th=[ 134], 90.00th=[ 136], 95.00th=[ 159], 00:13:32.651 | 99.00th=[ 182], 99.50th=[ 209], 99.90th=[ 230], 99.95th=[ 230], 00:13:32.651 | 99.99th=[ 241] 00:13:32.651 bw ( KiB/s): min=82432, max=126976, per=8.32%, avg=121856.00, stdev=9654.68, samples=20 00:13:32.651 iops : min= 322, max= 496, avg=476.00, stdev=37.71, samples=20 00:13:32.651 lat (msec) : 100=0.25%, 250=99.75% 00:13:32.651 cpu : usr=0.85%, sys=1.44%, ctx=5851, majf=0, minf=1 00:13:32.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:32.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:32.651 issued rwts: total=0,4823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.651 job5: (groupid=0, jobs=1): err= 0: pid=78653: Sun Nov 17 18:22:29 2024 00:13:32.651 write: IOPS=478, BW=120MiB/s (125MB/s)(1210MiB/10113msec); 0 zone resets 00:13:32.651 slat (usec): min=17, max=35320, avg=2061.48, stdev=3556.45 00:13:32.651 clat (msec): min=40, max=241, avg=131.58, stdev=12.16 00:13:32.651 lat (msec): min=40, max=241, avg=133.64, stdev=11.81 00:13:32.651 clat percentiles (msec): 00:13:32.651 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 124], 20.00th=[ 
126], 00:13:32.651 | 30.00th=[ 129], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 132], 00:13:32.651 | 70.00th=[ 133], 80.00th=[ 134], 90.00th=[ 136], 95.00th=[ 159], 00:13:32.651 | 99.00th=[ 169], 99.50th=[ 192], 99.90th=[ 234], 99.95th=[ 234], 00:13:32.651 | 99.99th=[ 243] 00:13:32.651 bw ( KiB/s): min=92160, max=126976, per=8.35%, avg=122316.80, stdev=7729.07, samples=20 00:13:32.651 iops : min= 360, max= 496, avg=477.80, stdev=30.19, samples=20 00:13:32.651 lat (msec) : 50=0.17%, 100=0.41%, 250=99.42% 00:13:32.651 cpu : usr=0.72%, sys=1.28%, ctx=5462, majf=0, minf=1 00:13:32.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:32.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:32.651 issued rwts: total=0,4841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.651 job6: (groupid=0, jobs=1): err= 0: pid=78654: Sun Nov 17 18:22:29 2024 00:13:32.651 write: IOPS=377, BW=94.4MiB/s (99.0MB/s)(958MiB/10147msec); 0 zone resets 00:13:32.651 slat (usec): min=19, max=90326, avg=2603.57, stdev=4770.61 00:13:32.651 clat (msec): min=92, max=311, avg=166.79, stdev=22.95 00:13:32.651 lat (msec): min=92, max=311, avg=169.40, stdev=22.80 00:13:32.651 clat percentiles (msec): 00:13:32.651 | 1.00th=[ 120], 5.00th=[ 133], 10.00th=[ 150], 20.00th=[ 155], 00:13:32.651 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 163], 00:13:32.651 | 70.00th=[ 165], 80.00th=[ 178], 90.00th=[ 205], 95.00th=[ 213], 00:13:32.651 | 99.00th=[ 224], 99.50th=[ 259], 99.90th=[ 300], 99.95th=[ 313], 00:13:32.651 | 99.99th=[ 313] 00:13:32.651 bw ( KiB/s): min=79872, max=108544, per=6.59%, avg=96486.40, stdev=8981.69, samples=20 00:13:32.651 iops : min= 312, max= 424, avg=376.90, stdev=35.08, samples=20 00:13:32.651 lat (msec) : 100=0.26%, 250=99.16%, 500=0.57% 00:13:32.651 cpu : usr=0.76%, sys=1.12%, ctx=3944, majf=0, minf=1 00:13:32.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:32.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:32.651 issued rwts: total=0,3832,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.651 job7: (groupid=0, jobs=1): err= 0: pid=78655: Sun Nov 17 18:22:29 2024 00:13:32.651 write: IOPS=479, BW=120MiB/s (126MB/s)(1213MiB/10116msec); 0 zone resets 00:13:32.651 slat (usec): min=18, max=18840, avg=2055.12, stdev=3521.85 00:13:32.651 clat (msec): min=10, max=249, avg=131.31, stdev=13.75 00:13:32.651 lat (msec): min=10, max=249, avg=133.36, stdev=13.49 00:13:32.651 clat percentiles (msec): 00:13:32.651 | 1.00th=[ 111], 5.00th=[ 123], 10.00th=[ 124], 20.00th=[ 126], 00:13:32.651 | 30.00th=[ 129], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 132], 00:13:32.652 | 70.00th=[ 133], 80.00th=[ 134], 90.00th=[ 136], 95.00th=[ 157], 00:13:32.652 | 99.00th=[ 169], 99.50th=[ 199], 99.90th=[ 241], 99.95th=[ 241], 00:13:32.652 | 99.99th=[ 249] 00:13:32.652 bw ( KiB/s): min=99840, max=126976, per=8.37%, avg=122611.50, stdev=6220.80, samples=20 00:13:32.652 iops : min= 390, max= 496, avg=478.95, stdev=24.30, samples=20 00:13:32.652 lat (msec) : 20=0.14%, 50=0.25%, 100=0.49%, 250=99.11% 00:13:32.652 cpu : usr=0.77%, sys=1.42%, ctx=6934, majf=0, minf=1 00:13:32.652 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:32.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:32.652 issued rwts: total=0,4853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.652 job8: (groupid=0, jobs=1): err= 0: pid=78656: Sun Nov 17 18:22:29 2024 00:13:32.652 write: IOPS=812, BW=203MiB/s (213MB/s)(2049MiB/10082msec); 0 zone resets 00:13:32.652 slat (usec): min=16, max=9550, avg=1215.22, stdev=2112.43 00:13:32.652 clat (msec): min=10, max=169, avg=77.48, stdev=17.90 00:13:32.652 lat (msec): min=10, max=169, avg=78.70, stdev=18.07 00:13:32.652 clat percentiles (msec): 00:13:32.652 | 1.00th=[ 53], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:13:32.652 | 30.00th=[ 58], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 90], 00:13:32.652 | 70.00th=[ 92], 80.00th=[ 93], 90.00th=[ 94], 95.00th=[ 95], 00:13:32.652 | 99.00th=[ 99], 99.50th=[ 112], 99.90th=[ 159], 99.95th=[ 165], 00:13:32.652 | 99.99th=[ 169] 00:13:32.652 bw ( KiB/s): min=174592, max=288768, per=14.21%, avg=208204.80, stdev=48446.67, samples=20 00:13:32.652 iops : min= 682, max= 1128, avg=813.30, stdev=189.24, samples=20 00:13:32.652 lat (msec) : 20=0.15%, 50=0.49%, 100=98.78%, 250=0.59% 00:13:32.652 cpu : usr=1.58%, sys=1.99%, ctx=9914, majf=0, minf=1 00:13:32.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:32.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:32.652 issued rwts: total=0,8196,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.652 job9: (groupid=0, jobs=1): err= 0: pid=78657: Sun Nov 17 18:22:29 2024 00:13:32.652 write: IOPS=684, BW=171MiB/s (179MB/s)(1725MiB/10079msec); 0 zone resets 00:13:32.652 slat (usec): min=17, max=15419, avg=1443.59, stdev=2460.22 00:13:32.652 clat (msec): min=20, max=170, avg=92.02, stdev= 9.43 00:13:32.652 lat (msec): min=20, max=170, avg=93.47, stdev= 9.24 00:13:32.652 clat percentiles (msec): 00:13:32.652 | 1.00th=[ 83], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:13:32.652 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 92], 00:13:32.652 | 70.00th=[ 93], 80.00th=[ 93], 90.00th=[ 94], 95.00th=[ 112], 00:13:32.652 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 159], 99.95th=[ 165], 00:13:32.652 | 99.99th=[ 171] 00:13:32.652 bw ( KiB/s): min=126976, max=182784, per=11.95%, avg=175001.60, stdev=12276.32, samples=20 00:13:32.652 iops : min= 496, max= 714, avg=683.60, stdev=47.95, samples=20 00:13:32.652 lat (msec) : 50=0.35%, 100=94.10%, 250=5.55% 00:13:32.652 cpu : usr=1.28%, sys=1.82%, ctx=7740, majf=0, minf=1 00:13:32.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:32.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:32.652 issued rwts: total=0,6899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.652 job10: (groupid=0, jobs=1): err= 0: pid=78658: Sun Nov 17 18:22:29 2024 00:13:32.652 write: IOPS=384, BW=96.2MiB/s (101MB/s)(976MiB/10152msec); 0 zone resets 00:13:32.652 slat (usec): min=17, max=59641, avg=2508.98, stdev=4521.32 00:13:32.652 clat (msec): min=16, 
max=314, avg=163.79, stdev=25.41 00:13:32.652 lat (msec): min=16, max=315, avg=166.30, stdev=25.41 00:13:32.652 clat percentiles (msec): 00:13:32.652 | 1.00th=[ 51], 5.00th=[ 131], 10.00th=[ 150], 20.00th=[ 155], 00:13:32.652 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 163], 00:13:32.652 | 70.00th=[ 165], 80.00th=[ 171], 90.00th=[ 197], 95.00th=[ 205], 00:13:32.652 | 99.00th=[ 211], 99.50th=[ 264], 99.90th=[ 305], 99.95th=[ 317], 00:13:32.652 | 99.99th=[ 317] 00:13:32.652 bw ( KiB/s): min=81920, max=125178, per=6.71%, avg=98367.70, stdev=9609.35, samples=20 00:13:32.652 iops : min= 320, max= 488, avg=384.20, stdev=37.39, samples=20 00:13:32.652 lat (msec) : 20=0.10%, 50=0.82%, 100=0.61%, 250=97.80%, 500=0.67% 00:13:32.652 cpu : usr=0.66%, sys=1.25%, ctx=3273, majf=0, minf=1 00:13:32.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:32.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:32.652 issued rwts: total=0,3905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.652 00:13:32.652 Run status group 0 (all jobs): 00:13:32.652 WRITE: bw=1431MiB/s (1500MB/s), 94.4MiB/s-203MiB/s (99.0MB/s-213MB/s), io=14.2GiB (15.2GB), run=10079-10154msec 00:13:32.652 00:13:32.652 Disk stats (read/write): 00:13:32.652 nvme0n1: ios=49/13557, merge=0/0, ticks=53/1210564, in_queue=1210617, util=97.59% 00:13:32.652 nvme10n1: ios=49/7558, merge=0/0, ticks=81/1208509, in_queue=1208590, util=97.86% 00:13:32.652 nvme1n1: ios=30/12120, merge=0/0, ticks=103/1212553, in_queue=1212656, util=97.95% 00:13:32.652 nvme2n1: ios=13/7578, merge=0/0, ticks=8/1206818, in_queue=1206826, util=97.84% 00:13:32.652 nvme3n1: ios=0/9469, merge=0/0, ticks=0/1208509, in_queue=1208509, util=97.86% 00:13:32.652 nvme4n1: ios=0/9509, merge=0/0, ticks=0/1209477, in_queue=1209477, util=98.23% 00:13:32.652 nvme5n1: ios=0/7496, merge=0/0, ticks=0/1205438, in_queue=1205438, util=98.25% 00:13:32.652 nvme6n1: ios=0/9551, merge=0/0, ticks=0/1211499, in_queue=1211499, util=98.60% 00:13:32.652 nvme7n1: ios=0/16202, merge=0/0, ticks=0/1211014, in_queue=1211014, util=98.73% 00:13:32.652 nvme8n1: ios=0/13615, merge=0/0, ticks=0/1211508, in_queue=1211508, util=98.86% 00:13:32.652 nvme9n1: ios=0/7648, merge=0/0, ticks=0/1208003, in_queue=1208003, util=98.91% 00:13:32.652 18:22:29 -- target/multiconnection.sh@36 -- # sync 00:13:32.652 18:22:29 -- target/multiconnection.sh@37 -- # seq 1 11 00:13:32.652 18:22:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:32.652 18:22:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.652 18:22:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:13:32.652 18:22:29 -- common/autotest_common.sh@1208 -- # local i=0 00:13:32.652 18:22:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:32.652 18:22:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:13:32.652 18:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:32.652 18:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:13:32.652 18:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:32.652 18:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.652 18:22:30 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.652 18:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.652 18:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.652 18:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:32.652 18:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:13:32.652 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:13:32.652 18:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:13:32.652 18:22:30 -- common/autotest_common.sh@1208 -- # local i=0 00:13:32.652 18:22:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:32.652 18:22:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:13:32.652 18:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:32.652 18:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:13:32.652 18:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:32.652 18:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:32.652 18:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.652 18:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.652 18:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.652 18:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:32.652 18:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:13:32.652 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:13:32.652 18:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:13:32.652 18:22:30 -- common/autotest_common.sh@1208 -- # local i=0 00:13:32.652 18:22:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:32.652 18:22:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:13:32.652 18:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:32.652 18:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:13:32.652 18:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:32.652 18:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:32.652 18:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.652 18:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.652 18:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.652 18:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:32.652 18:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:13:32.652 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:13:32.652 18:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:13:32.652 18:22:30 -- common/autotest_common.sh@1208 -- # local i=0 00:13:32.652 18:22:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:32.652 18:22:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:13:32.652 18:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:32.652 18:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:13:32.652 18:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:32.652 18:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:32.652 18:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.652 18:22:30 -- 
common/autotest_common.sh@10 -- # set +x 00:13:32.652 18:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.652 18:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:32.653 18:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:13:32.653 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:13:32.653 18:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:13:32.653 18:22:30 -- common/autotest_common.sh@1208 -- # local i=0 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:13:32.653 18:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:32.653 18:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:13:32.653 18:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.653 18:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.653 18:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.653 18:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:32.653 18:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:13:32.653 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:13:32.653 18:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:13:32.653 18:22:30 -- common/autotest_common.sh@1208 -- # local i=0 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:13:32.653 18:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:32.653 18:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:13:32.653 18:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.653 18:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.653 18:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.653 18:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:32.653 18:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:13:32.653 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:13:32.653 18:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:13:32.653 18:22:30 -- common/autotest_common.sh@1208 -- # local i=0 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:13:32.653 18:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:32.653 18:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:13:32.653 18:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.653 18:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.653 18:22:30 -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:13:32.653 18:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:32.653 18:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:13:32.653 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:13:32.653 18:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:13:32.653 18:22:30 -- common/autotest_common.sh@1208 -- # local i=0 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:13:32.653 18:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:32.653 18:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:13:32.653 18:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.653 18:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.653 18:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.653 18:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:32.653 18:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:13:32.653 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:13:32.653 18:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:13:32.653 18:22:30 -- common/autotest_common.sh@1208 -- # local i=0 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:13:32.653 18:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:32.653 18:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:13:32.653 18:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.653 18:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.653 18:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.653 18:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:32.653 18:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:13:32.653 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:13:32.653 18:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:13:32.653 18:22:30 -- common/autotest_common.sh@1208 -- # local i=0 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:13:32.653 18:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:32.653 18:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:13:32.653 18:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.653 18:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.653 18:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.653 18:22:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:13:32.653 18:22:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:13:32.653 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:13:32.653 18:22:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:13:32.653 18:22:30 -- common/autotest_common.sh@1208 -- # local i=0 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:32.653 18:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:13:32.653 18:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:13:32.653 18:22:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:13:32.653 18:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.653 18:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.653 18:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.653 18:22:30 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:13:32.653 18:22:30 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:32.653 18:22:30 -- target/multiconnection.sh@47 -- # nvmftestfini 00:13:32.653 18:22:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:32.653 18:22:30 -- nvmf/common.sh@116 -- # sync 00:13:32.653 18:22:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:32.653 18:22:30 -- nvmf/common.sh@119 -- # set +e 00:13:32.653 18:22:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:32.653 18:22:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:32.653 rmmod nvme_tcp 00:13:32.653 rmmod nvme_fabrics 00:13:32.912 rmmod nvme_keyring 00:13:32.912 18:22:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:32.912 18:22:30 -- nvmf/common.sh@123 -- # set -e 00:13:32.912 18:22:30 -- nvmf/common.sh@124 -- # return 0 00:13:32.912 18:22:30 -- nvmf/common.sh@477 -- # '[' -n 77961 ']' 00:13:32.912 18:22:30 -- nvmf/common.sh@478 -- # killprocess 77961 00:13:32.912 18:22:30 -- common/autotest_common.sh@936 -- # '[' -z 77961 ']' 00:13:32.912 18:22:30 -- common/autotest_common.sh@940 -- # kill -0 77961 00:13:32.912 18:22:30 -- common/autotest_common.sh@941 -- # uname 00:13:32.912 18:22:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:32.912 18:22:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77961 00:13:32.912 killing process with pid 77961 00:13:32.912 18:22:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:32.912 18:22:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:32.912 18:22:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77961' 00:13:32.912 18:22:30 -- common/autotest_common.sh@955 -- # kill 77961 00:13:32.912 18:22:30 -- common/autotest_common.sh@960 -- # wait 77961 00:13:33.171 18:22:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:33.171 18:22:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:33.171 18:22:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:33.171 18:22:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.171 18:22:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:33.171 18:22:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.171 18:22:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.171 18:22:31 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.171 18:22:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:33.171 00:13:33.171 real 0m49.012s 00:13:33.171 user 2m37.792s 00:13:33.171 sys 0m37.180s 00:13:33.171 18:22:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:33.171 ************************************ 00:13:33.171 END TEST nvmf_multiconnection 00:13:33.171 ************************************ 00:13:33.171 18:22:31 -- common/autotest_common.sh@10 -- # set +x 00:13:33.171 18:22:31 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:33.171 18:22:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:33.171 18:22:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:33.171 18:22:31 -- common/autotest_common.sh@10 -- # set +x 00:13:33.171 ************************************ 00:13:33.171 START TEST nvmf_initiator_timeout 00:13:33.171 ************************************ 00:13:33.171 18:22:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:33.430 * Looking for test storage... 00:13:33.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.430 18:22:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:33.430 18:22:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:33.430 18:22:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:33.430 18:22:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:33.430 18:22:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:33.430 18:22:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:33.430 18:22:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:33.430 18:22:31 -- scripts/common.sh@335 -- # IFS=.-: 00:13:33.430 18:22:31 -- scripts/common.sh@335 -- # read -ra ver1 00:13:33.430 18:22:31 -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.430 18:22:31 -- scripts/common.sh@336 -- # read -ra ver2 00:13:33.430 18:22:31 -- scripts/common.sh@337 -- # local 'op=<' 00:13:33.430 18:22:31 -- scripts/common.sh@339 -- # ver1_l=2 00:13:33.430 18:22:31 -- scripts/common.sh@340 -- # ver2_l=1 00:13:33.430 18:22:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:33.430 18:22:31 -- scripts/common.sh@343 -- # case "$op" in 00:13:33.430 18:22:31 -- scripts/common.sh@344 -- # : 1 00:13:33.430 18:22:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:33.430 18:22:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.430 18:22:31 -- scripts/common.sh@364 -- # decimal 1 00:13:33.430 18:22:31 -- scripts/common.sh@352 -- # local d=1 00:13:33.430 18:22:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.430 18:22:31 -- scripts/common.sh@354 -- # echo 1 00:13:33.430 18:22:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:33.430 18:22:31 -- scripts/common.sh@365 -- # decimal 2 00:13:33.430 18:22:31 -- scripts/common.sh@352 -- # local d=2 00:13:33.430 18:22:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.430 18:22:31 -- scripts/common.sh@354 -- # echo 2 00:13:33.430 18:22:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:33.430 18:22:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:33.430 18:22:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:33.430 18:22:31 -- scripts/common.sh@367 -- # return 0 00:13:33.430 18:22:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.430 18:22:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:33.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.430 --rc genhtml_branch_coverage=1 00:13:33.430 --rc genhtml_function_coverage=1 00:13:33.430 --rc genhtml_legend=1 00:13:33.430 --rc geninfo_all_blocks=1 00:13:33.430 --rc geninfo_unexecuted_blocks=1 00:13:33.430 00:13:33.430 ' 00:13:33.430 18:22:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:33.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.430 --rc genhtml_branch_coverage=1 00:13:33.430 --rc genhtml_function_coverage=1 00:13:33.430 --rc genhtml_legend=1 00:13:33.430 --rc geninfo_all_blocks=1 00:13:33.430 --rc geninfo_unexecuted_blocks=1 00:13:33.430 00:13:33.430 ' 00:13:33.430 18:22:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:33.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.430 --rc genhtml_branch_coverage=1 00:13:33.430 --rc genhtml_function_coverage=1 00:13:33.430 --rc genhtml_legend=1 00:13:33.430 --rc geninfo_all_blocks=1 00:13:33.430 --rc geninfo_unexecuted_blocks=1 00:13:33.430 00:13:33.430 ' 00:13:33.430 18:22:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:33.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.430 --rc genhtml_branch_coverage=1 00:13:33.430 --rc genhtml_function_coverage=1 00:13:33.430 --rc genhtml_legend=1 00:13:33.430 --rc geninfo_all_blocks=1 00:13:33.430 --rc geninfo_unexecuted_blocks=1 00:13:33.430 00:13:33.430 ' 00:13:33.430 18:22:31 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:33.430 18:22:31 -- nvmf/common.sh@7 -- # uname -s 00:13:33.430 18:22:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.430 18:22:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.430 18:22:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.430 18:22:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.430 18:22:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.430 18:22:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.430 18:22:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.430 18:22:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.430 18:22:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.430 18:22:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.430 18:22:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 
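A note on the host identity set up above: the NQN produced by nvme gen-hostnqn carries a UUID, and that same UUID is reused as the host ID when the initiator connects later in this run. A minimal sketch of that reuse, assuming a stock nvme-cli and the target address used in this log:

HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
HOSTID=${HOSTNQN##*:}              # the trailing UUID doubles as the host ID
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 \
  --hostnqn="$HOSTNQN" --hostid="$HOSTID"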
00:13:33.430 18:22:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:13:33.430 18:22:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.430 18:22:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.430 18:22:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:33.430 18:22:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.430 18:22:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.430 18:22:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.430 18:22:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.430 18:22:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.430 18:22:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.430 18:22:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.430 18:22:31 -- paths/export.sh@5 -- # export PATH 00:13:33.430 18:22:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.430 18:22:31 -- nvmf/common.sh@46 -- # : 0 00:13:33.430 18:22:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:33.430 18:22:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:33.430 18:22:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:33.430 18:22:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.430 18:22:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.430 18:22:31 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:33.430 18:22:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:33.430 18:22:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:33.430 18:22:31 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:33.430 18:22:31 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:33.430 18:22:31 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:13:33.430 18:22:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:33.430 18:22:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.430 18:22:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:33.430 18:22:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:33.430 18:22:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:33.430 18:22:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.430 18:22:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.430 18:22:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.430 18:22:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:33.430 18:22:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:33.430 18:22:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:33.430 18:22:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:33.430 18:22:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:33.430 18:22:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:33.430 18:22:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.430 18:22:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.430 18:22:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:33.430 18:22:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:33.430 18:22:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:33.430 18:22:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:33.430 18:22:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:33.430 18:22:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.430 18:22:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:33.430 18:22:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:33.431 18:22:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:33.431 18:22:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:33.431 18:22:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:33.431 18:22:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:33.431 Cannot find device "nvmf_tgt_br" 00:13:33.431 18:22:31 -- nvmf/common.sh@154 -- # true 00:13:33.431 18:22:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:33.431 Cannot find device "nvmf_tgt_br2" 00:13:33.431 18:22:31 -- nvmf/common.sh@155 -- # true 00:13:33.431 18:22:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:33.431 18:22:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:33.431 Cannot find device "nvmf_tgt_br" 00:13:33.431 18:22:31 -- nvmf/common.sh@157 -- # true 00:13:33.431 18:22:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:33.431 Cannot find device "nvmf_tgt_br2" 00:13:33.431 18:22:31 -- nvmf/common.sh@158 -- # true 00:13:33.431 18:22:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:33.689 18:22:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:33.689 18:22:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:33.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:33.689 18:22:31 -- nvmf/common.sh@161 -- # true 00:13:33.689 18:22:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:33.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:33.689 18:22:31 -- nvmf/common.sh@162 -- # true 00:13:33.689 18:22:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:33.689 18:22:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:33.689 18:22:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:33.689 18:22:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:33.689 18:22:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:33.689 18:22:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:33.689 18:22:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:33.689 18:22:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:33.689 18:22:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:33.689 18:22:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:33.689 18:22:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:33.689 18:22:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:33.689 18:22:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:33.689 18:22:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:33.689 18:22:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:33.689 18:22:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:33.689 18:22:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:33.689 18:22:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:33.689 18:22:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:33.689 18:22:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:33.689 18:22:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:33.689 18:22:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:33.689 18:22:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:33.689 18:22:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:33.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:13:33.689 00:13:33.689 --- 10.0.0.2 ping statistics --- 00:13:33.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.690 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:33.690 18:22:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:33.690 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:33.690 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:33.690 00:13:33.690 --- 10.0.0.3 ping statistics --- 00:13:33.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.690 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:33.690 18:22:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:33.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:33.690 00:13:33.690 --- 10.0.0.1 ping statistics --- 00:13:33.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.690 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:33.690 18:22:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.690 18:22:31 -- nvmf/common.sh@421 -- # return 0 00:13:33.690 18:22:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:33.690 18:22:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.690 18:22:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:33.690 18:22:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:33.690 18:22:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.690 18:22:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:33.690 18:22:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:33.690 18:22:31 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:13:33.690 18:22:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:33.690 18:22:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:33.690 18:22:31 -- common/autotest_common.sh@10 -- # set +x 00:13:33.690 18:22:31 -- nvmf/common.sh@469 -- # nvmfpid=79030 00:13:33.690 18:22:31 -- nvmf/common.sh@470 -- # waitforlisten 79030 00:13:33.690 18:22:31 -- common/autotest_common.sh@829 -- # '[' -z 79030 ']' 00:13:33.690 18:22:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.690 18:22:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:33.690 18:22:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.690 18:22:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.690 18:22:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.690 18:22:31 -- common/autotest_common.sh@10 -- # set +x 00:13:33.948 [2024-11-17 18:22:31.975777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:33.948 [2024-11-17 18:22:31.976396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.948 [2024-11-17 18:22:32.112011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.948 [2024-11-17 18:22:32.150301] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:33.948 [2024-11-17 18:22:32.150713] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.948 [2024-11-17 18:22:32.150925] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.948 [2024-11-17 18:22:32.151077] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
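The lines above show nvmf_tgt being launched inside the nvmf_tgt_ns_spdk namespace and the harness waiting for its RPC socket. Roughly, that wait amounts to polling the socket until the application answers; a sketch, assuming rpc.py and the default /var/tmp/spdk.sock path:

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll until the target responds on its RPC socket before issuing any RPCs.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done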
00:13:33.948 [2024-11-17 18:22:32.151344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.948 [2024-11-17 18:22:32.151478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.948 [2024-11-17 18:22:32.151549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.948 [2024-11-17 18:22:32.151550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.206 18:22:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.206 18:22:32 -- common/autotest_common.sh@862 -- # return 0 00:13:34.206 18:22:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:34.206 18:22:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:34.206 18:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 18:22:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.206 18:22:32 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:34.206 18:22:32 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:34.206 18:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.206 18:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 Malloc0 00:13:34.206 18:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.206 18:22:32 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:13:34.206 18:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.206 18:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 Delay0 00:13:34.206 18:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.206 18:22:32 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:34.206 18:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.206 18:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 [2024-11-17 18:22:32.318872] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.206 18:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.206 18:22:32 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:34.206 18:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.206 18:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 18:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.206 18:22:32 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.206 18:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.206 18:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 18:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.206 18:22:32 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.206 18:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.206 18:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:34.206 [2024-11-17 18:22:32.346999] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.206 18:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.206 18:22:32 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.464 18:22:32 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.464 18:22:32 -- common/autotest_common.sh@1187 -- # local i=0 00:13:34.464 18:22:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.464 18:22:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:34.464 18:22:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:36.364 18:22:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:36.364 18:22:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:36.364 18:22:34 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.364 18:22:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:36.364 18:22:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.364 18:22:34 -- common/autotest_common.sh@1197 -- # return 0 00:13:36.364 18:22:34 -- target/initiator_timeout.sh@35 -- # fio_pid=79087 00:13:36.364 18:22:34 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:13:36.364 18:22:34 -- target/initiator_timeout.sh@37 -- # sleep 3 00:13:36.364 [global] 00:13:36.364 thread=1 00:13:36.364 invalidate=1 00:13:36.364 rw=write 00:13:36.364 time_based=1 00:13:36.364 runtime=60 00:13:36.364 ioengine=libaio 00:13:36.364 direct=1 00:13:36.364 bs=4096 00:13:36.364 iodepth=1 00:13:36.364 norandommap=0 00:13:36.364 numjobs=1 00:13:36.364 00:13:36.364 verify_dump=1 00:13:36.364 verify_backlog=512 00:13:36.364 verify_state_save=0 00:13:36.364 do_verify=1 00:13:36.364 verify=crc32c-intel 00:13:36.364 [job0] 00:13:36.364 filename=/dev/nvme0n1 00:13:36.364 Could not set queue depth (nvme0n1) 00:13:36.623 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.623 fio-3.35 00:13:36.623 Starting 1 thread 00:13:39.908 18:22:37 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:13:39.908 18:22:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.908 18:22:37 -- common/autotest_common.sh@10 -- # set +x 00:13:39.908 true 00:13:39.908 18:22:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.908 18:22:37 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:13:39.908 18:22:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.908 18:22:37 -- common/autotest_common.sh@10 -- # set +x 00:13:39.908 true 00:13:39.908 18:22:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.908 18:22:37 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:13:39.908 18:22:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.908 18:22:37 -- common/autotest_common.sh@10 -- # set +x 00:13:39.908 true 00:13:39.908 18:22:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.908 18:22:37 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:13:39.908 18:22:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.908 18:22:37 -- common/autotest_common.sh@10 -- # set +x 00:13:39.908 true 00:13:39.908 18:22:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.908 18:22:37 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:13:42.435 18:22:40 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:13:42.435 18:22:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.435 18:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:42.435 true 00:13:42.435 18:22:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.435 18:22:40 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:13:42.435 18:22:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.435 18:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:42.435 true 00:13:42.435 18:22:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.435 18:22:40 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:13:42.435 18:22:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.435 18:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:42.435 true 00:13:42.435 18:22:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.435 18:22:40 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:13:42.435 18:22:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.435 18:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:42.435 true 00:13:42.435 18:22:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.435 18:22:40 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:13:42.435 18:22:40 -- target/initiator_timeout.sh@54 -- # wait 79087 00:14:38.744 00:14:38.744 job0: (groupid=0, jobs=1): err= 0: pid=79108: Sun Nov 17 18:23:34 2024 00:14:38.744 read: IOPS=806, BW=3225KiB/s (3302kB/s)(189MiB/60000msec) 00:14:38.744 slat (usec): min=10, max=173, avg=13.97, stdev= 4.73 00:14:38.744 clat (usec): min=46, max=2874, avg=200.52, stdev=28.73 00:14:38.744 lat (usec): min=162, max=2898, avg=214.49, stdev=29.69 00:14:38.744 clat percentiles (usec): 00:14:38.744 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:14:38.744 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:14:38.744 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 241], 00:14:38.744 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 310], 99.95th=[ 412], 00:14:38.744 | 99.99th=[ 799] 00:14:38.744 write: IOPS=810, BW=3243KiB/s (3320kB/s)(190MiB/60000msec); 0 zone resets 00:14:38.744 slat (usec): min=13, max=11772, avg=22.08, stdev=72.05 00:14:38.744 clat (usec): min=114, max=40780k, avg=994.73, stdev=184906.01 00:14:38.744 lat (usec): min=132, max=40780k, avg=1016.81, stdev=184906.00 00:14:38.744 clat percentiles (usec): 00:14:38.744 | 1.00th=[ 122], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 139], 00:14:38.744 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 159], 00:14:38.744 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 194], 00:14:38.744 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 251], 99.95th=[ 273], 00:14:38.744 | 99.99th=[ 545] 00:14:38.744 bw ( KiB/s): min= 5472, max=12288, per=100.00%, avg=10023.84, stdev=1316.20, samples=38 00:14:38.744 iops : min= 1368, max= 3072, avg=2505.95, stdev=329.04, samples=38 00:14:38.744 lat (usec) : 50=0.01%, 250=98.72%, 500=1.26%, 750=0.02%, 1000=0.01% 00:14:38.744 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:14:38.744 cpu : usr=0.59%, sys=2.21%, ctx=97019, majf=0, minf=5 00:14:38.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:14:38.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.744 issued rwts: total=48373,48640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.744 00:14:38.745 Run status group 0 (all jobs): 00:14:38.745 READ: bw=3225KiB/s (3302kB/s), 3225KiB/s-3225KiB/s (3302kB/s-3302kB/s), io=189MiB (198MB), run=60000-60000msec 00:14:38.745 WRITE: bw=3243KiB/s (3320kB/s), 3243KiB/s-3243KiB/s (3320kB/s-3320kB/s), io=190MiB (199MB), run=60000-60000msec 00:14:38.745 00:14:38.745 Disk stats (read/write): 00:14:38.745 nvme0n1: ios=48384/48388, merge=0/0, ticks=9962/8095, in_queue=18057, util=99.68% 00:14:38.745 18:23:34 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:38.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.745 18:23:34 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:38.745 18:23:34 -- common/autotest_common.sh@1208 -- # local i=0 00:14:38.745 18:23:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:38.745 18:23:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.745 18:23:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:38.745 18:23:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.745 18:23:34 -- common/autotest_common.sh@1220 -- # return 0 00:14:38.745 18:23:34 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:14:38.745 18:23:34 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:14:38.745 nvmf hotplug test: fio successful as expected 00:14:38.745 18:23:34 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.745 18:23:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.745 18:23:34 -- common/autotest_common.sh@10 -- # set +x 00:14:38.745 18:23:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.745 18:23:34 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:14:38.745 18:23:34 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:14:38.745 18:23:34 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:14:38.745 18:23:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:38.745 18:23:34 -- nvmf/common.sh@116 -- # sync 00:14:38.745 18:23:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:38.745 18:23:34 -- nvmf/common.sh@119 -- # set +e 00:14:38.745 18:23:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:38.745 18:23:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:38.745 rmmod nvme_tcp 00:14:38.745 rmmod nvme_fabrics 00:14:38.745 rmmod nvme_keyring 00:14:38.745 18:23:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:38.745 18:23:34 -- nvmf/common.sh@123 -- # set -e 00:14:38.745 18:23:34 -- nvmf/common.sh@124 -- # return 0 00:14:38.745 18:23:34 -- nvmf/common.sh@477 -- # '[' -n 79030 ']' 00:14:38.745 18:23:34 -- nvmf/common.sh@478 -- # killprocess 79030 00:14:38.745 18:23:34 -- common/autotest_common.sh@936 -- # '[' -z 79030 ']' 00:14:38.745 18:23:34 -- common/autotest_common.sh@940 -- # kill -0 79030 00:14:38.745 18:23:34 -- common/autotest_common.sh@941 -- # uname 00:14:38.745 18:23:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:38.745 18:23:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79030 00:14:38.745 killing process with pid 79030 
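With fio finished, the teardown above runs in a fixed order: disconnect the initiator, delete the subsystem over RPC, unload the fabrics modules, then stop the target. Condensed into a hedged sketch (the paths and the $nvmfpid variable are the ones used earlier in this log):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drop the host-side session first
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp                                # also removes nvme_fabrics/nvme_keyring, as echoed above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                     # stop nvmf_tgt inside the namespace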
00:14:38.745 18:23:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:38.745 18:23:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:38.745 18:23:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79030' 00:14:38.745 18:23:34 -- common/autotest_common.sh@955 -- # kill 79030 00:14:38.745 18:23:34 -- common/autotest_common.sh@960 -- # wait 79030 00:14:38.745 18:23:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:38.745 18:23:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:38.745 18:23:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:38.745 18:23:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.745 18:23:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:38.745 18:23:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.745 18:23:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.745 18:23:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.745 18:23:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:38.745 00:14:38.745 real 1m3.783s 00:14:38.745 user 3m50.958s 00:14:38.745 sys 0m20.948s 00:14:38.745 ************************************ 00:14:38.745 END TEST nvmf_initiator_timeout 00:14:38.745 ************************************ 00:14:38.745 18:23:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:38.745 18:23:35 -- common/autotest_common.sh@10 -- # set +x 00:14:38.745 18:23:35 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:14:38.745 18:23:35 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:38.745 18:23:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.745 18:23:35 -- common/autotest_common.sh@10 -- # set +x 00:14:38.745 18:23:35 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:38.745 18:23:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.745 18:23:35 -- common/autotest_common.sh@10 -- # set +x 00:14:38.745 18:23:35 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:38.745 18:23:35 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:38.745 18:23:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:38.745 18:23:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.745 18:23:35 -- common/autotest_common.sh@10 -- # set +x 00:14:38.745 ************************************ 00:14:38.745 START TEST nvmf_identify 00:14:38.745 ************************************ 00:14:38.745 18:23:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:38.745 * Looking for test storage... 
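The identify test starting here goes through the same run_test wrapper as the previous suites, so it can also be exercised on its own. An illustrative standalone invocation, assuming the repository layout shown in these paths:

cd /home/vagrant/spdk_repo/spdk
./test/nvmf/host/identify.sh --transport=tcp   # same script and flag as the run_test call above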
00:14:38.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:38.745 18:23:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:38.745 18:23:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:38.745 18:23:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:38.745 18:23:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:38.745 18:23:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:38.745 18:23:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:38.745 18:23:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:38.745 18:23:35 -- scripts/common.sh@335 -- # IFS=.-: 00:14:38.745 18:23:35 -- scripts/common.sh@335 -- # read -ra ver1 00:14:38.745 18:23:35 -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.745 18:23:35 -- scripts/common.sh@336 -- # read -ra ver2 00:14:38.745 18:23:35 -- scripts/common.sh@337 -- # local 'op=<' 00:14:38.745 18:23:35 -- scripts/common.sh@339 -- # ver1_l=2 00:14:38.745 18:23:35 -- scripts/common.sh@340 -- # ver2_l=1 00:14:38.745 18:23:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:38.745 18:23:35 -- scripts/common.sh@343 -- # case "$op" in 00:14:38.745 18:23:35 -- scripts/common.sh@344 -- # : 1 00:14:38.745 18:23:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:38.745 18:23:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:38.745 18:23:35 -- scripts/common.sh@364 -- # decimal 1 00:14:38.745 18:23:35 -- scripts/common.sh@352 -- # local d=1 00:14:38.745 18:23:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.745 18:23:35 -- scripts/common.sh@354 -- # echo 1 00:14:38.745 18:23:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:38.745 18:23:35 -- scripts/common.sh@365 -- # decimal 2 00:14:38.745 18:23:35 -- scripts/common.sh@352 -- # local d=2 00:14:38.745 18:23:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.745 18:23:35 -- scripts/common.sh@354 -- # echo 2 00:14:38.745 18:23:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:38.745 18:23:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:38.745 18:23:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:38.745 18:23:35 -- scripts/common.sh@367 -- # return 0 00:14:38.745 18:23:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.745 18:23:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:38.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.745 --rc genhtml_branch_coverage=1 00:14:38.745 --rc genhtml_function_coverage=1 00:14:38.745 --rc genhtml_legend=1 00:14:38.745 --rc geninfo_all_blocks=1 00:14:38.745 --rc geninfo_unexecuted_blocks=1 00:14:38.745 00:14:38.745 ' 00:14:38.745 18:23:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:38.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.745 --rc genhtml_branch_coverage=1 00:14:38.745 --rc genhtml_function_coverage=1 00:14:38.745 --rc genhtml_legend=1 00:14:38.745 --rc geninfo_all_blocks=1 00:14:38.745 --rc geninfo_unexecuted_blocks=1 00:14:38.745 00:14:38.745 ' 00:14:38.745 18:23:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:38.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.745 --rc genhtml_branch_coverage=1 00:14:38.745 --rc genhtml_function_coverage=1 00:14:38.745 --rc genhtml_legend=1 00:14:38.745 --rc geninfo_all_blocks=1 00:14:38.745 --rc geninfo_unexecuted_blocks=1 00:14:38.745 00:14:38.745 ' 00:14:38.745 
18:23:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:38.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.745 --rc genhtml_branch_coverage=1 00:14:38.745 --rc genhtml_function_coverage=1 00:14:38.745 --rc genhtml_legend=1 00:14:38.745 --rc geninfo_all_blocks=1 00:14:38.745 --rc geninfo_unexecuted_blocks=1 00:14:38.745 00:14:38.745 ' 00:14:38.745 18:23:35 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:38.745 18:23:35 -- nvmf/common.sh@7 -- # uname -s 00:14:38.745 18:23:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.745 18:23:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.745 18:23:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.746 18:23:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.746 18:23:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.746 18:23:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.746 18:23:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.746 18:23:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.746 18:23:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.746 18:23:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.746 18:23:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:14:38.746 18:23:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:14:38.746 18:23:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.746 18:23:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.746 18:23:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:38.746 18:23:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.746 18:23:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.746 18:23:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.746 18:23:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.746 18:23:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.746 18:23:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.746 18:23:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.746 18:23:35 -- paths/export.sh@5 -- # export PATH 00:14:38.746 18:23:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.746 18:23:35 -- nvmf/common.sh@46 -- # : 0 00:14:38.746 18:23:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:38.746 18:23:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:38.746 18:23:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:38.746 18:23:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.746 18:23:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.746 18:23:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:38.746 18:23:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:38.746 18:23:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:38.746 18:23:35 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.746 18:23:35 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.746 18:23:35 -- host/identify.sh@14 -- # nvmftestinit 00:14:38.746 18:23:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:38.746 18:23:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.746 18:23:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:38.746 18:23:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:38.746 18:23:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:38.746 18:23:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.746 18:23:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.746 18:23:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.746 18:23:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:38.746 18:23:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:38.746 18:23:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:38.746 18:23:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:38.746 18:23:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:38.746 18:23:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:38.746 18:23:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.746 18:23:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.746 18:23:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:38.746 18:23:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:38.746 18:23:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:38.746 18:23:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:38.746 18:23:35 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:38.746 18:23:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.746 18:23:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:38.746 18:23:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:38.746 18:23:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:38.746 18:23:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:38.746 18:23:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:38.746 18:23:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:38.746 Cannot find device "nvmf_tgt_br" 00:14:38.746 18:23:35 -- nvmf/common.sh@154 -- # true 00:14:38.746 18:23:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:38.746 Cannot find device "nvmf_tgt_br2" 00:14:38.746 18:23:35 -- nvmf/common.sh@155 -- # true 00:14:38.746 18:23:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:38.746 18:23:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:38.746 Cannot find device "nvmf_tgt_br" 00:14:38.746 18:23:35 -- nvmf/common.sh@157 -- # true 00:14:38.746 18:23:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:38.746 Cannot find device "nvmf_tgt_br2" 00:14:38.746 18:23:35 -- nvmf/common.sh@158 -- # true 00:14:38.746 18:23:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:38.746 18:23:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:38.746 18:23:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:38.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.746 18:23:35 -- nvmf/common.sh@161 -- # true 00:14:38.746 18:23:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:38.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.746 18:23:35 -- nvmf/common.sh@162 -- # true 00:14:38.746 18:23:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:38.746 18:23:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:38.746 18:23:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:38.746 18:23:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:38.746 18:23:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:38.746 18:23:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:38.746 18:23:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:38.746 18:23:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:38.746 18:23:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:38.746 18:23:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:38.746 18:23:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:38.746 18:23:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:38.746 18:23:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:38.746 18:23:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.746 18:23:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.746 18:23:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:14:38.746 18:23:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:38.746 18:23:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:38.746 18:23:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.746 18:23:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.746 18:23:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.746 18:23:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.746 18:23:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.746 18:23:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:38.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:14:38.746 00:14:38.746 --- 10.0.0.2 ping statistics --- 00:14:38.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.746 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:38.746 18:23:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:38.746 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.746 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:38.746 00:14:38.746 --- 10.0.0.3 ping statistics --- 00:14:38.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.746 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:38.746 18:23:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:38.746 00:14:38.746 --- 10.0.0.1 ping statistics --- 00:14:38.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.747 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:38.747 18:23:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.747 18:23:35 -- nvmf/common.sh@421 -- # return 0 00:14:38.747 18:23:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:38.747 18:23:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.747 18:23:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:38.747 18:23:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:38.747 18:23:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.747 18:23:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:38.747 18:23:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:38.747 18:23:35 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:38.747 18:23:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.747 18:23:35 -- common/autotest_common.sh@10 -- # set +x 00:14:38.747 18:23:35 -- host/identify.sh@19 -- # nvmfpid=79961 00:14:38.747 18:23:35 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:38.747 18:23:35 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.747 18:23:35 -- host/identify.sh@23 -- # waitforlisten 79961 00:14:38.747 18:23:35 -- common/autotest_common.sh@829 -- # '[' -z 79961 ']' 00:14:38.747 18:23:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.747 18:23:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
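The network plumbing traced above (nvmf_veth_init) boils down to a veth pair per target interface, a bridge, and an iptables accept rule for the NVMe/TCP port. A condensed sketch of the same setup by hand, using the interface names and addresses from the trace (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is built the same way and omitted here):

  # names/addresses as in the trace above; nvmf_tgt_if moves into the namespace,
  # its peer nvmf_tgt_br stays on the host and joins the bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # host -> namespaced target, as verified in the trace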
00:14:38.747 18:23:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.747 18:23:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.747 18:23:35 -- common/autotest_common.sh@10 -- # set +x 00:14:38.747 [2024-11-17 18:23:35.913183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:38.747 [2024-11-17 18:23:35.913311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.747 [2024-11-17 18:23:36.051607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.747 [2024-11-17 18:23:36.088181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:38.747 [2024-11-17 18:23:36.088820] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.747 [2024-11-17 18:23:36.089061] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.747 [2024-11-17 18:23:36.089355] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.747 [2024-11-17 18:23:36.089747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.747 [2024-11-17 18:23:36.089929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.747 [2024-11-17 18:23:36.089996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.747 [2024-11-17 18:23:36.090027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.747 18:23:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.747 18:23:36 -- common/autotest_common.sh@862 -- # return 0 00:14:38.747 18:23:36 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.747 18:23:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.747 18:23:36 -- common/autotest_common.sh@10 -- # set +x 00:14:38.747 [2024-11-17 18:23:36.942524] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.747 18:23:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.747 18:23:36 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:38.747 18:23:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.747 18:23:36 -- common/autotest_common.sh@10 -- # set +x 00:14:38.747 18:23:36 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:38.747 18:23:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.747 18:23:36 -- common/autotest_common.sh@10 -- # set +x 00:14:39.010 Malloc0 00:14:39.010 18:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.010 18:23:37 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:39.010 18:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.010 18:23:37 -- common/autotest_common.sh@10 -- # set +x 00:14:39.010 18:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.010 18:23:37 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:39.010 18:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.010 18:23:37 -- common/autotest_common.sh@10 -- # set +x 00:14:39.010 
18:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.010 18:23:37 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.010 18:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.010 18:23:37 -- common/autotest_common.sh@10 -- # set +x 00:14:39.010 [2024-11-17 18:23:37.039664] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.010 18:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.010 18:23:37 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.010 18:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.010 18:23:37 -- common/autotest_common.sh@10 -- # set +x 00:14:39.010 18:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.010 18:23:37 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:39.010 18:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.010 18:23:37 -- common/autotest_common.sh@10 -- # set +x 00:14:39.010 [2024-11-17 18:23:37.055427] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:39.010 [ 00:14:39.010 { 00:14:39.010 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:39.010 "subtype": "Discovery", 00:14:39.010 "listen_addresses": [ 00:14:39.010 { 00:14:39.010 "transport": "TCP", 00:14:39.010 "trtype": "TCP", 00:14:39.010 "adrfam": "IPv4", 00:14:39.010 "traddr": "10.0.0.2", 00:14:39.010 "trsvcid": "4420" 00:14:39.010 } 00:14:39.010 ], 00:14:39.010 "allow_any_host": true, 00:14:39.010 "hosts": [] 00:14:39.010 }, 00:14:39.010 { 00:14:39.010 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.010 "subtype": "NVMe", 00:14:39.010 "listen_addresses": [ 00:14:39.010 { 00:14:39.010 "transport": "TCP", 00:14:39.010 "trtype": "TCP", 00:14:39.010 "adrfam": "IPv4", 00:14:39.010 "traddr": "10.0.0.2", 00:14:39.010 "trsvcid": "4420" 00:14:39.010 } 00:14:39.010 ], 00:14:39.010 "allow_any_host": true, 00:14:39.010 "hosts": [], 00:14:39.010 "serial_number": "SPDK00000000000001", 00:14:39.010 "model_number": "SPDK bdev Controller", 00:14:39.010 "max_namespaces": 32, 00:14:39.010 "min_cntlid": 1, 00:14:39.010 "max_cntlid": 65519, 00:14:39.010 "namespaces": [ 00:14:39.011 { 00:14:39.011 "nsid": 1, 00:14:39.011 "bdev_name": "Malloc0", 00:14:39.011 "name": "Malloc0", 00:14:39.011 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:39.011 "eui64": "ABCDEF0123456789", 00:14:39.011 "uuid": "8ab80fc7-8de9-4293-86c9-0097e9146ba0" 00:14:39.011 } 00:14:39.011 ] 00:14:39.011 } 00:14:39.011 ] 00:14:39.011 18:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.011 18:23:37 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:39.011 [2024-11-17 18:23:37.088893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
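The target provisioning above is a short RPC sequence. Outside the test harness the same calls can be issued with SPDK's scripts/rpc.py; the test drives them through its rpc_cmd wrapper, so the rpc.py form below is a sketch under that assumption, reusing the arguments from the trace:

  # paths assumed relative to the SPDK repo root (rpc.py usage is an assumption;
  # arguments are the ones traced above)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems
  # then query the discovery controller, as the test does next:
  build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all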
00:14:39.011 [2024-11-17 18:23:37.088931] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79996 ] 00:14:39.011 [2024-11-17 18:23:37.221387] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:39.011 [2024-11-17 18:23:37.221468] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:39.011 [2024-11-17 18:23:37.221476] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:39.011 [2024-11-17 18:23:37.221487] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:39.011 [2024-11-17 18:23:37.221498] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:39.011 [2024-11-17 18:23:37.221621] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:39.011 [2024-11-17 18:23:37.221710] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10bd540 0 00:14:39.011 [2024-11-17 18:23:37.233355] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:39.011 [2024-11-17 18:23:37.233379] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:39.011 [2024-11-17 18:23:37.233385] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:39.011 [2024-11-17 18:23:37.233389] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:39.011 [2024-11-17 18:23:37.233436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.233444] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.233448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bd540) 00:14:39.011 [2024-11-17 18:23:37.233463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:39.011 [2024-11-17 18:23:37.233496] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6220, cid 0, qid 0 00:14:39.011 [2024-11-17 18:23:37.241365] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.011 [2024-11-17 18:23:37.241394] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.011 [2024-11-17 18:23:37.241399] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241404] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6220) on tqpair=0x10bd540 00:14:39.011 [2024-11-17 18:23:37.241417] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:39.011 [2024-11-17 18:23:37.241425] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:39.011 [2024-11-17 18:23:37.241433] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:39.011 [2024-11-17 18:23:37.241450] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241455] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.011 [2024-11-17 
18:23:37.241459] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bd540) 00:14:39.011 [2024-11-17 18:23:37.241469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.011 [2024-11-17 18:23:37.241498] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6220, cid 0, qid 0 00:14:39.011 [2024-11-17 18:23:37.241560] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.011 [2024-11-17 18:23:37.241567] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.011 [2024-11-17 18:23:37.241571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6220) on tqpair=0x10bd540 00:14:39.011 [2024-11-17 18:23:37.241582] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:39.011 [2024-11-17 18:23:37.241590] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:39.011 [2024-11-17 18:23:37.241598] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241603] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241607] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bd540) 00:14:39.011 [2024-11-17 18:23:37.241615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.011 [2024-11-17 18:23:37.241634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6220, cid 0, qid 0 00:14:39.011 [2024-11-17 18:23:37.241684] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.011 [2024-11-17 18:23:37.241691] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.011 [2024-11-17 18:23:37.241695] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241699] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6220) on tqpair=0x10bd540 00:14:39.011 [2024-11-17 18:23:37.241706] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:39.011 [2024-11-17 18:23:37.241716] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:39.011 [2024-11-17 18:23:37.241724] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241728] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241732] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bd540) 00:14:39.011 [2024-11-17 18:23:37.241740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.011 [2024-11-17 18:23:37.241758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6220, cid 0, qid 0 00:14:39.011 [2024-11-17 18:23:37.241805] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.011 [2024-11-17 18:23:37.241812] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.011 [2024-11-17 18:23:37.241822] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241827] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6220) on tqpair=0x10bd540 00:14:39.011 [2024-11-17 18:23:37.241834] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:39.011 [2024-11-17 18:23:37.241845] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241849] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241853] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bd540) 00:14:39.011 [2024-11-17 18:23:37.241861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.011 [2024-11-17 18:23:37.241878] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6220, cid 0, qid 0 00:14:39.011 [2024-11-17 18:23:37.241923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.011 [2024-11-17 18:23:37.241930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.011 [2024-11-17 18:23:37.241934] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.241938] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6220) on tqpair=0x10bd540 00:14:39.011 [2024-11-17 18:23:37.241944] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:39.011 [2024-11-17 18:23:37.241951] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:39.011 [2024-11-17 18:23:37.241959] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:39.011 [2024-11-17 18:23:37.242065] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:39.011 [2024-11-17 18:23:37.242070] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:39.011 [2024-11-17 18:23:37.242079] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.242084] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.242088] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bd540) 00:14:39.011 [2024-11-17 18:23:37.242096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.011 [2024-11-17 18:23:37.242114] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6220, cid 0, qid 0 00:14:39.011 [2024-11-17 18:23:37.242159] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.011 [2024-11-17 18:23:37.242166] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.011 [2024-11-17 18:23:37.242169] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:39.011 [2024-11-17 18:23:37.242174] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6220) on tqpair=0x10bd540 00:14:39.011 [2024-11-17 18:23:37.242180] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:39.011 [2024-11-17 18:23:37.242191] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.242196] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.242200] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bd540) 00:14:39.011 [2024-11-17 18:23:37.242208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.011 [2024-11-17 18:23:37.242224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6220, cid 0, qid 0 00:14:39.011 [2024-11-17 18:23:37.242269] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.011 [2024-11-17 18:23:37.242289] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.011 [2024-11-17 18:23:37.242294] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.011 [2024-11-17 18:23:37.242299] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6220) on tqpair=0x10bd540 00:14:39.011 [2024-11-17 18:23:37.242305] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:39.012 [2024-11-17 18:23:37.242311] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:39.012 [2024-11-17 18:23:37.242320] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:39.012 [2024-11-17 18:23:37.242336] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:39.012 [2024-11-17 18:23:37.242347] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242362] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242366] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bd540) 00:14:39.012 [2024-11-17 18:23:37.242375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.012 [2024-11-17 18:23:37.242395] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6220, cid 0, qid 0 00:14:39.012 [2024-11-17 18:23:37.242484] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.012 [2024-11-17 18:23:37.242491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.012 [2024-11-17 18:23:37.242496] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242510] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bd540): datao=0, datal=4096, cccid=0 00:14:39.012 [2024-11-17 18:23:37.242516] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f6220) on tqpair(0x10bd540): expected_datao=0, 
payload_size=4096 00:14:39.012 [2024-11-17 18:23:37.242526] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242531] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242540] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.012 [2024-11-17 18:23:37.242546] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.012 [2024-11-17 18:23:37.242550] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242554] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6220) on tqpair=0x10bd540 00:14:39.012 [2024-11-17 18:23:37.242564] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:39.012 [2024-11-17 18:23:37.242570] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:39.012 [2024-11-17 18:23:37.242575] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:39.012 [2024-11-17 18:23:37.242581] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:39.012 [2024-11-17 18:23:37.242586] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:39.012 [2024-11-17 18:23:37.242591] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:39.012 [2024-11-17 18:23:37.242605] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:39.012 [2024-11-17 18:23:37.242614] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242619] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242623] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bd540) 00:14:39.012 [2024-11-17 18:23:37.242632] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.012 [2024-11-17 18:23:37.242652] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6220, cid 0, qid 0 00:14:39.012 [2024-11-17 18:23:37.242705] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.012 [2024-11-17 18:23:37.242712] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.012 [2024-11-17 18:23:37.242715] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242720] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6220) on tqpair=0x10bd540 00:14:39.012 [2024-11-17 18:23:37.242729] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242733] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242737] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bd540) 00:14:39.012 [2024-11-17 18:23:37.242744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.012 [2024-11-17 
18:23:37.242751] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242759] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10bd540) 00:14:39.012 [2024-11-17 18:23:37.242766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.012 [2024-11-17 18:23:37.242772] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242776] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242780] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10bd540) 00:14:39.012 [2024-11-17 18:23:37.242787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.012 [2024-11-17 18:23:37.242793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242801] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.012 [2024-11-17 18:23:37.242807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.012 [2024-11-17 18:23:37.242813] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:39.012 [2024-11-17 18:23:37.242826] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:39.012 [2024-11-17 18:23:37.242834] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.242842] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bd540) 00:14:39.012 [2024-11-17 18:23:37.242850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.012 [2024-11-17 18:23:37.242870] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6220, cid 0, qid 0 00:14:39.012 [2024-11-17 18:23:37.242877] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6380, cid 1, qid 0 00:14:39.012 [2024-11-17 18:23:37.242882] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f64e0, cid 2, qid 0 00:14:39.012 [2024-11-17 18:23:37.242887] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.012 [2024-11-17 18:23:37.242900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f67a0, cid 4, qid 0 00:14:39.012 [2024-11-17 18:23:37.242984] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.012 [2024-11-17 18:23:37.243002] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.012 [2024-11-17 18:23:37.243007] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.243011] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x10f67a0) on tqpair=0x10bd540 00:14:39.012 [2024-11-17 18:23:37.243018] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:39.012 [2024-11-17 18:23:37.243024] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:39.012 [2024-11-17 18:23:37.243036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.243041] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.243045] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bd540) 00:14:39.012 [2024-11-17 18:23:37.243053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.012 [2024-11-17 18:23:37.243071] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f67a0, cid 4, qid 0 00:14:39.012 [2024-11-17 18:23:37.243137] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.012 [2024-11-17 18:23:37.243144] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.012 [2024-11-17 18:23:37.243148] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.243152] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bd540): datao=0, datal=4096, cccid=4 00:14:39.012 [2024-11-17 18:23:37.243157] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f67a0) on tqpair(0x10bd540): expected_datao=0, payload_size=4096 00:14:39.012 [2024-11-17 18:23:37.243165] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.243169] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.243178] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.012 [2024-11-17 18:23:37.243184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.012 [2024-11-17 18:23:37.243188] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.243192] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f67a0) on tqpair=0x10bd540 00:14:39.012 [2024-11-17 18:23:37.243207] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:39.012 [2024-11-17 18:23:37.243233] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.243239] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.243243] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bd540) 00:14:39.012 [2024-11-17 18:23:37.243251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.012 [2024-11-17 18:23:37.243259] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.243264] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.012 [2024-11-17 18:23:37.243268] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bd540) 00:14:39.012 [2024-11-17 18:23:37.243288] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.012 [2024-11-17 18:23:37.243317] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f67a0, cid 4, qid 0 00:14:39.012 [2024-11-17 18:23:37.243324] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6900, cid 5, qid 0 00:14:39.012 [2024-11-17 18:23:37.243434] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.012 [2024-11-17 18:23:37.243443] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.012 [2024-11-17 18:23:37.243447] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243451] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bd540): datao=0, datal=1024, cccid=4 00:14:39.013 [2024-11-17 18:23:37.243456] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f67a0) on tqpair(0x10bd540): expected_datao=0, payload_size=1024 00:14:39.013 [2024-11-17 18:23:37.243464] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243469] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243475] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.013 [2024-11-17 18:23:37.243481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.013 [2024-11-17 18:23:37.243485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6900) on tqpair=0x10bd540 00:14:39.013 [2024-11-17 18:23:37.243509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.013 [2024-11-17 18:23:37.243516] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.013 [2024-11-17 18:23:37.243520] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f67a0) on tqpair=0x10bd540 00:14:39.013 [2024-11-17 18:23:37.243542] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243548] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243552] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bd540) 00:14:39.013 [2024-11-17 18:23:37.243560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.013 [2024-11-17 18:23:37.243584] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f67a0, cid 4, qid 0 00:14:39.013 [2024-11-17 18:23:37.243650] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.013 [2024-11-17 18:23:37.243657] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.013 [2024-11-17 18:23:37.243661] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243665] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bd540): datao=0, datal=3072, cccid=4 00:14:39.013 [2024-11-17 18:23:37.243670] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f67a0) on tqpair(0x10bd540): expected_datao=0, payload_size=3072 00:14:39.013 [2024-11-17 
18:23:37.243678] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243682] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243690] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.013 [2024-11-17 18:23:37.243696] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.013 [2024-11-17 18:23:37.243700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243705] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f67a0) on tqpair=0x10bd540 00:14:39.013 [2024-11-17 18:23:37.243715] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243720] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243725] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bd540) 00:14:39.013 [2024-11-17 18:23:37.243732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.013 [2024-11-17 18:23:37.243754] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f67a0, cid 4, qid 0 00:14:39.013 [2024-11-17 18:23:37.243825] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.013 [2024-11-17 18:23:37.243832] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.013 [2024-11-17 18:23:37.243836] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243840] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bd540): datao=0, datal=8, cccid=4 00:14:39.013 [2024-11-17 18:23:37.243845] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f67a0) on tqpair(0x10bd540): expected_datao=0, payload_size=8 00:14:39.013 [2024-11-17 18:23:37.243852] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243856] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243871] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.013 [2024-11-17 18:23:37.243878] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.013 [2024-11-17 18:23:37.243882] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.013 [2024-11-17 18:23:37.243886] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f67a0) on tqpair=0x10bd540 00:14:39.013 ===================================================== 00:14:39.013 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:39.013 ===================================================== 00:14:39.013 Controller Capabilities/Features 00:14:39.013 ================================ 00:14:39.013 Vendor ID: 0000 00:14:39.013 Subsystem Vendor ID: 0000 00:14:39.013 Serial Number: .................... 00:14:39.013 Model Number: ........................................ 
00:14:39.013 Firmware Version: 24.01.1 00:14:39.013 Recommended Arb Burst: 0 00:14:39.013 IEEE OUI Identifier: 00 00 00 00:14:39.013 Multi-path I/O 00:14:39.013 May have multiple subsystem ports: No 00:14:39.013 May have multiple controllers: No 00:14:39.013 Associated with SR-IOV VF: No 00:14:39.013 Max Data Transfer Size: 131072 00:14:39.013 Max Number of Namespaces: 0 00:14:39.013 Max Number of I/O Queues: 1024 00:14:39.013 NVMe Specification Version (VS): 1.3 00:14:39.013 NVMe Specification Version (Identify): 1.3 00:14:39.013 Maximum Queue Entries: 128 00:14:39.013 Contiguous Queues Required: Yes 00:14:39.013 Arbitration Mechanisms Supported 00:14:39.013 Weighted Round Robin: Not Supported 00:14:39.013 Vendor Specific: Not Supported 00:14:39.013 Reset Timeout: 15000 ms 00:14:39.013 Doorbell Stride: 4 bytes 00:14:39.013 NVM Subsystem Reset: Not Supported 00:14:39.013 Command Sets Supported 00:14:39.013 NVM Command Set: Supported 00:14:39.013 Boot Partition: Not Supported 00:14:39.013 Memory Page Size Minimum: 4096 bytes 00:14:39.013 Memory Page Size Maximum: 4096 bytes 00:14:39.013 Persistent Memory Region: Not Supported 00:14:39.013 Optional Asynchronous Events Supported 00:14:39.013 Namespace Attribute Notices: Not Supported 00:14:39.013 Firmware Activation Notices: Not Supported 00:14:39.013 ANA Change Notices: Not Supported 00:14:39.013 PLE Aggregate Log Change Notices: Not Supported 00:14:39.013 LBA Status Info Alert Notices: Not Supported 00:14:39.013 EGE Aggregate Log Change Notices: Not Supported 00:14:39.013 Normal NVM Subsystem Shutdown event: Not Supported 00:14:39.013 Zone Descriptor Change Notices: Not Supported 00:14:39.013 Discovery Log Change Notices: Supported 00:14:39.013 Controller Attributes 00:14:39.013 128-bit Host Identifier: Not Supported 00:14:39.013 Non-Operational Permissive Mode: Not Supported 00:14:39.013 NVM Sets: Not Supported 00:14:39.013 Read Recovery Levels: Not Supported 00:14:39.013 Endurance Groups: Not Supported 00:14:39.013 Predictable Latency Mode: Not Supported 00:14:39.013 Traffic Based Keep ALive: Not Supported 00:14:39.013 Namespace Granularity: Not Supported 00:14:39.013 SQ Associations: Not Supported 00:14:39.013 UUID List: Not Supported 00:14:39.013 Multi-Domain Subsystem: Not Supported 00:14:39.013 Fixed Capacity Management: Not Supported 00:14:39.013 Variable Capacity Management: Not Supported 00:14:39.013 Delete Endurance Group: Not Supported 00:14:39.013 Delete NVM Set: Not Supported 00:14:39.013 Extended LBA Formats Supported: Not Supported 00:14:39.013 Flexible Data Placement Supported: Not Supported 00:14:39.013 00:14:39.013 Controller Memory Buffer Support 00:14:39.013 ================================ 00:14:39.013 Supported: No 00:14:39.013 00:14:39.013 Persistent Memory Region Support 00:14:39.013 ================================ 00:14:39.013 Supported: No 00:14:39.013 00:14:39.013 Admin Command Set Attributes 00:14:39.013 ============================ 00:14:39.013 Security Send/Receive: Not Supported 00:14:39.013 Format NVM: Not Supported 00:14:39.013 Firmware Activate/Download: Not Supported 00:14:39.013 Namespace Management: Not Supported 00:14:39.013 Device Self-Test: Not Supported 00:14:39.013 Directives: Not Supported 00:14:39.013 NVMe-MI: Not Supported 00:14:39.013 Virtualization Management: Not Supported 00:14:39.013 Doorbell Buffer Config: Not Supported 00:14:39.013 Get LBA Status Capability: Not Supported 00:14:39.013 Command & Feature Lockdown Capability: Not Supported 00:14:39.013 Abort Command Limit: 1 00:14:39.013 
Async Event Request Limit: 4 00:14:39.013 Number of Firmware Slots: N/A 00:14:39.013 Firmware Slot 1 Read-Only: N/A 00:14:39.013 Firmware Activation Without Reset: N/A 00:14:39.013 Multiple Update Detection Support: N/A 00:14:39.013 Firmware Update Granularity: No Information Provided 00:14:39.013 Per-Namespace SMART Log: No 00:14:39.013 Asymmetric Namespace Access Log Page: Not Supported 00:14:39.013 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:39.013 Command Effects Log Page: Not Supported 00:14:39.013 Get Log Page Extended Data: Supported 00:14:39.013 Telemetry Log Pages: Not Supported 00:14:39.013 Persistent Event Log Pages: Not Supported 00:14:39.013 Supported Log Pages Log Page: May Support 00:14:39.013 Commands Supported & Effects Log Page: Not Supported 00:14:39.013 Feature Identifiers & Effects Log Page:May Support 00:14:39.013 NVMe-MI Commands & Effects Log Page: May Support 00:14:39.013 Data Area 4 for Telemetry Log: Not Supported 00:14:39.013 Error Log Page Entries Supported: 128 00:14:39.013 Keep Alive: Not Supported 00:14:39.014 00:14:39.014 NVM Command Set Attributes 00:14:39.014 ========================== 00:14:39.014 Submission Queue Entry Size 00:14:39.014 Max: 1 00:14:39.014 Min: 1 00:14:39.014 Completion Queue Entry Size 00:14:39.014 Max: 1 00:14:39.014 Min: 1 00:14:39.014 Number of Namespaces: 0 00:14:39.014 Compare Command: Not Supported 00:14:39.014 Write Uncorrectable Command: Not Supported 00:14:39.014 Dataset Management Command: Not Supported 00:14:39.014 Write Zeroes Command: Not Supported 00:14:39.014 Set Features Save Field: Not Supported 00:14:39.014 Reservations: Not Supported 00:14:39.014 Timestamp: Not Supported 00:14:39.014 Copy: Not Supported 00:14:39.014 Volatile Write Cache: Not Present 00:14:39.014 Atomic Write Unit (Normal): 1 00:14:39.014 Atomic Write Unit (PFail): 1 00:14:39.014 Atomic Compare & Write Unit: 1 00:14:39.014 Fused Compare & Write: Supported 00:14:39.014 Scatter-Gather List 00:14:39.014 SGL Command Set: Supported 00:14:39.014 SGL Keyed: Supported 00:14:39.014 SGL Bit Bucket Descriptor: Not Supported 00:14:39.014 SGL Metadata Pointer: Not Supported 00:14:39.014 Oversized SGL: Not Supported 00:14:39.014 SGL Metadata Address: Not Supported 00:14:39.014 SGL Offset: Supported 00:14:39.014 Transport SGL Data Block: Not Supported 00:14:39.014 Replay Protected Memory Block: Not Supported 00:14:39.014 00:14:39.014 Firmware Slot Information 00:14:39.014 ========================= 00:14:39.014 Active slot: 0 00:14:39.014 00:14:39.014 00:14:39.014 Error Log 00:14:39.014 ========= 00:14:39.014 00:14:39.014 Active Namespaces 00:14:39.014 ================= 00:14:39.014 Discovery Log Page 00:14:39.014 ================== 00:14:39.014 Generation Counter: 2 00:14:39.014 Number of Records: 2 00:14:39.014 Record Format: 0 00:14:39.014 00:14:39.014 Discovery Log Entry 0 00:14:39.014 ---------------------- 00:14:39.014 Transport Type: 3 (TCP) 00:14:39.014 Address Family: 1 (IPv4) 00:14:39.014 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:39.014 Entry Flags: 00:14:39.014 Duplicate Returned Information: 1 00:14:39.014 Explicit Persistent Connection Support for Discovery: 1 00:14:39.014 Transport Requirements: 00:14:39.014 Secure Channel: Not Required 00:14:39.014 Port ID: 0 (0x0000) 00:14:39.014 Controller ID: 65535 (0xffff) 00:14:39.014 Admin Max SQ Size: 128 00:14:39.014 Transport Service Identifier: 4420 00:14:39.014 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:39.014 Transport Address: 10.0.0.2 00:14:39.014 
Discovery Log Entry 1 00:14:39.014 ---------------------- 00:14:39.014 Transport Type: 3 (TCP) 00:14:39.014 Address Family: 1 (IPv4) 00:14:39.014 Subsystem Type: 2 (NVM Subsystem) 00:14:39.014 Entry Flags: 00:14:39.014 Duplicate Returned Information: 0 00:14:39.014 Explicit Persistent Connection Support for Discovery: 0 00:14:39.014 Transport Requirements: 00:14:39.014 Secure Channel: Not Required 00:14:39.014 Port ID: 0 (0x0000) 00:14:39.014 Controller ID: 65535 (0xffff) 00:14:39.014 Admin Max SQ Size: 128 00:14:39.014 Transport Service Identifier: 4420 00:14:39.014 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:39.014 Transport Address: 10.0.0.2 [2024-11-17 18:23:37.244001] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:39.014 [2024-11-17 18:23:37.244022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.014 [2024-11-17 18:23:37.244030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.014 [2024-11-17 18:23:37.244037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.014 [2024-11-17 18:23:37.244043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.014 [2024-11-17 18:23:37.244053] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244058] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244062] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.014 [2024-11-17 18:23:37.244070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.014 [2024-11-17 18:23:37.244098] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.014 [2024-11-17 18:23:37.244157] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.014 [2024-11-17 18:23:37.244164] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.014 [2024-11-17 18:23:37.244168] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244173] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.014 [2024-11-17 18:23:37.244182] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244186] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244190] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.014 [2024-11-17 18:23:37.244198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.014 [2024-11-17 18:23:37.244219] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.014 [2024-11-17 18:23:37.244300] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.014 [2024-11-17 18:23:37.244309] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.014 [2024-11-17 18:23:37.244312] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244317] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.014 [2024-11-17 18:23:37.244323] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:39.014 [2024-11-17 18:23:37.244328] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:39.014 [2024-11-17 18:23:37.244339] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244344] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244348] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.014 [2024-11-17 18:23:37.244356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.014 [2024-11-17 18:23:37.244376] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.014 [2024-11-17 18:23:37.244425] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.014 [2024-11-17 18:23:37.244432] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.014 [2024-11-17 18:23:37.244436] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.014 [2024-11-17 18:23:37.244453] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244457] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244461] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.014 [2024-11-17 18:23:37.244469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.014 [2024-11-17 18:23:37.244486] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.014 [2024-11-17 18:23:37.244531] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.014 [2024-11-17 18:23:37.244538] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.014 [2024-11-17 18:23:37.244542] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244546] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.014 [2024-11-17 18:23:37.244557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244562] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244566] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.014 [2024-11-17 18:23:37.244573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.014 [2024-11-17 18:23:37.244590] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.014 [2024-11-17 18:23:37.244638] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.014 [2024-11-17 
18:23:37.244644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.014 [2024-11-17 18:23:37.244648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244652] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.014 [2024-11-17 18:23:37.244664] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244668] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244672] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.014 [2024-11-17 18:23:37.244680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.014 [2024-11-17 18:23:37.244696] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.014 [2024-11-17 18:23:37.244744] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.014 [2024-11-17 18:23:37.244751] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.014 [2024-11-17 18:23:37.244755] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.014 [2024-11-17 18:23:37.244759] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.015 [2024-11-17 18:23:37.244770] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.244775] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.244779] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.015 [2024-11-17 18:23:37.244786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.015 [2024-11-17 18:23:37.244802] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.015 [2024-11-17 18:23:37.244850] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.015 [2024-11-17 18:23:37.244857] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.015 [2024-11-17 18:23:37.244861] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.244865] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.015 [2024-11-17 18:23:37.244877] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.244881] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.244885] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.015 [2024-11-17 18:23:37.244893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.015 [2024-11-17 18:23:37.244909] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.015 [2024-11-17 18:23:37.244954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.015 [2024-11-17 18:23:37.244961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.015 [2024-11-17 18:23:37.244965] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
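For reference, the discovery log page printed above (generation counter 2, two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420) can also be fetched directly with SPDK's public host API rather than through the identify tool. A minimal sketch, assuming an already-connected discovery controller handle named ctrlr (an illustrative variable, not taken from this run) and omitting error handling:

    /*
     * Illustrative sketch, not part of the captured output: read the discovery
     * log page shown above (log page 0x70, the "GET LOG PAGE (02) ...
     * cdw10:00010070" admin command in the trace). Error handling omitted.
     */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static bool g_log_page_done;

    static void
    get_log_page_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        g_log_page_done = true;
    }

    static void
    dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* Room for the header plus the two records this target reports. */
        size_t len = sizeof(struct spdk_nvmf_discovery_log_page) +
                     2 * sizeof(struct spdk_nvmf_discovery_log_page_entry);
        struct spdk_nvmf_discovery_log_page *log = calloc(1, len);

        spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                         SPDK_NVME_GLOBAL_NS_TAG,
                                         log, len, 0,
                                         get_log_page_done, NULL);
        /* The request completes asynchronously; poll the admin queue. */
        while (!g_log_page_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }

        printf("genctr %" PRIu64 ", %" PRIu64 " records\n",
               log->genctr, log->numrec);
        for (uint64_t i = 0; i < log->numrec; i++) {
            const struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];

            /* TRADDR/TRSVCID/SUBNQN are fixed-width, space-padded fields. */
            printf("  subnqn %.*s traddr %.*s trsvcid %.*s\n",
                   (int)sizeof(e->subnqn), e->subnqn,
                   (int)sizeof(e->traddr), e->traddr,
                   (int)sizeof(e->trsvcid), e->trsvcid);
        }
        free(log);
    }
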
00:14:39.015 [2024-11-17 18:23:37.244969] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.015 [2024-11-17 18:23:37.244980] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.244985] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.244989] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.015 [2024-11-17 18:23:37.244996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.015 [2024-11-17 18:23:37.245013] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.015 [2024-11-17 18:23:37.245061] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.015 [2024-11-17 18:23:37.245067] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.015 [2024-11-17 18:23:37.245071] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.245075] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.015 [2024-11-17 18:23:37.245086] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.245091] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.245095] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.015 [2024-11-17 18:23:37.245103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.015 [2024-11-17 18:23:37.245119] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.015 [2024-11-17 18:23:37.245173] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.015 [2024-11-17 18:23:37.245179] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.015 [2024-11-17 18:23:37.245183] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.245188] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.015 [2024-11-17 18:23:37.245199] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.245204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.245208] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.015 [2024-11-17 18:23:37.245215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.015 [2024-11-17 18:23:37.245232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.015 [2024-11-17 18:23:37.248322] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.015 [2024-11-17 18:23:37.248345] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.015 [2024-11-17 18:23:37.248350] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.248355] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.015 [2024-11-17 18:23:37.248370] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.248376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.248380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bd540) 00:14:39.015 [2024-11-17 18:23:37.248389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.015 [2024-11-17 18:23:37.248414] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f6640, cid 3, qid 0 00:14:39.015 [2024-11-17 18:23:37.248464] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.015 [2024-11-17 18:23:37.248471] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.015 [2024-11-17 18:23:37.248474] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.015 [2024-11-17 18:23:37.248479] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f6640) on tqpair=0x10bd540 00:14:39.015 [2024-11-17 18:23:37.248488] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:14:39.015 00:14:39.015 18:23:37 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:39.280 [2024-11-17 18:23:37.284268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:39.280 [2024-11-17 18:23:37.284336] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80002 ] 00:14:39.280 [2024-11-17 18:23:37.420731] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:39.280 [2024-11-17 18:23:37.420807] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:39.280 [2024-11-17 18:23:37.420815] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:39.280 [2024-11-17 18:23:37.420827] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:39.280 [2024-11-17 18:23:37.420839] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:39.280 [2024-11-17 18:23:37.420967] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:39.280 [2024-11-17 18:23:37.421035] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x668540 0 00:14:39.280 [2024-11-17 18:23:37.428373] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:39.280 [2024-11-17 18:23:37.428400] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:39.280 [2024-11-17 18:23:37.428407] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:39.280 [2024-11-17 18:23:37.428411] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:39.280 [2024-11-17 18:23:37.428455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.280 [2024-11-17 18:23:37.428464] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.280 [2024-11-17 18:23:37.428468] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x668540) 00:14:39.280 [2024-11-17 18:23:37.428483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:39.280 [2024-11-17 18:23:37.428515] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1220, cid 0, qid 0 00:14:39.280 [2024-11-17 18:23:37.436399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.280 [2024-11-17 18:23:37.436424] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.280 [2024-11-17 18:23:37.436430] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.280 [2024-11-17 18:23:37.436436] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1220) on tqpair=0x668540 00:14:39.280 [2024-11-17 18:23:37.436450] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:39.280 [2024-11-17 18:23:37.436458] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:39.280 [2024-11-17 18:23:37.436464] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:39.280 [2024-11-17 18:23:37.436480] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.280 [2024-11-17 18:23:37.436485] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.280 [2024-11-17 18:23:37.436489] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x668540) 00:14:39.280 [2024-11-17 18:23:37.436499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.281 [2024-11-17 18:23:37.436526] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1220, cid 0, qid 0 00:14:39.281 [2024-11-17 18:23:37.436587] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.281 [2024-11-17 18:23:37.436595] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.281 [2024-11-17 18:23:37.436599] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.436603] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1220) on tqpair=0x668540 00:14:39.281 [2024-11-17 18:23:37.436610] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:39.281 [2024-11-17 18:23:37.436618] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:39.281 [2024-11-17 18:23:37.436631] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.436635] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.436639] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x668540) 00:14:39.281 [2024-11-17 18:23:37.436647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.281 [2024-11-17 18:23:37.436667] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1220, cid 0, qid 0 00:14:39.281 [2024-11-17 18:23:37.436714] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.281 [2024-11-17 18:23:37.436721] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.281 [2024-11-17 18:23:37.436725] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.436729] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1220) on tqpair=0x668540 00:14:39.281 [2024-11-17 18:23:37.436736] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:39.281 [2024-11-17 18:23:37.436745] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:39.281 [2024-11-17 18:23:37.436753] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.436758] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.436762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x668540) 00:14:39.281 [2024-11-17 18:23:37.436770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.281 [2024-11-17 18:23:37.436788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1220, cid 0, qid 0 00:14:39.281 [2024-11-17 18:23:37.436837] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.281 [2024-11-17 18:23:37.436845] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.281 [2024-11-17 18:23:37.436849] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.436853] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1220) on tqpair=0x668540 00:14:39.281 [2024-11-17 18:23:37.436859] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:39.281 [2024-11-17 18:23:37.436870] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.436875] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.436879] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x668540) 00:14:39.281 [2024-11-17 18:23:37.436887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.281 [2024-11-17 18:23:37.436904] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1220, cid 0, qid 0 00:14:39.281 [2024-11-17 18:23:37.436955] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.281 [2024-11-17 18:23:37.436962] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.281 [2024-11-17 18:23:37.436966] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.436970] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1220) on tqpair=0x668540 00:14:39.281 [2024-11-17 18:23:37.436976] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:39.281 [2024-11-17 18:23:37.436981] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:39.281 [2024-11-17 18:23:37.436990] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:39.281 [2024-11-17 18:23:37.437096] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:39.281 [2024-11-17 18:23:37.437100] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:39.281 [2024-11-17 18:23:37.437110] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437114] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437118] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x668540) 00:14:39.281 [2024-11-17 18:23:37.437126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.281 [2024-11-17 18:23:37.437144] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1220, cid 0, qid 0 00:14:39.281 [2024-11-17 18:23:37.437198] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.281 [2024-11-17 18:23:37.437205] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.281 [2024-11-17 18:23:37.437209] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437214] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1220) on tqpair=0x668540 00:14:39.281 [2024-11-17 18:23:37.437219] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:39.281 [2024-11-17 18:23:37.437230] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437239] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x668540) 00:14:39.281 [2024-11-17 18:23:37.437250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.281 [2024-11-17 18:23:37.437267] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1220, cid 0, qid 0 00:14:39.281 [2024-11-17 18:23:37.437328] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.281 [2024-11-17 18:23:37.437338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.281 [2024-11-17 18:23:37.437342] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437346] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1220) on tqpair=0x668540 00:14:39.281 [2024-11-17 18:23:37.437352] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:39.281 [2024-11-17 18:23:37.437358] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:39.281 [2024-11-17 18:23:37.437366] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:39.281 [2024-11-17 18:23:37.437382] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 
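The DEBUG entries in this second run trace spdk_nvme_identify connecting to nqn.2016-06.io.spdk:cnode1: socket connect and icreq, FABRIC CONNECT, reading VS and CAP, writing CC.EN = 1, waiting for CSTS.RDY = 1, and then the admin IDENTIFY and feature-configuration commands that follow. With the public host API that whole sequence is driven by spdk_nvme_connect(); a minimal sketch using the same transport string the test script passes via -r (illustrative only, not part of this run, error handling omitted):

    /*
     * Illustrative sketch, not part of the captured output: connect to the
     * same subsystem the identify tool above targets and print a few
     * identify-controller fields. spdk_nvme_connect() internally performs the
     * icreq/FABRIC CONNECT exchange and the CC.EN = 1 / CSTS.RDY = 1 enable
     * sequence that the DEBUG entries above trace step by step.
     */
    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport string the test passes to spdk_nvme_identify via -r. */
        memset(&trid, 0, sizeof(trid));
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* A few of the identify-controller fields printed in the report below. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
        printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);
        printf("Max Number of Namespaces: %u\n", cdata->nn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }
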
00:14:39.281 [2024-11-17 18:23:37.437393] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437397] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437401] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x668540) 00:14:39.281 [2024-11-17 18:23:37.437410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.281 [2024-11-17 18:23:37.437431] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1220, cid 0, qid 0 00:14:39.281 [2024-11-17 18:23:37.437519] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.281 [2024-11-17 18:23:37.437527] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.281 [2024-11-17 18:23:37.437531] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437536] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x668540): datao=0, datal=4096, cccid=0 00:14:39.281 [2024-11-17 18:23:37.437541] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1220) on tqpair(0x668540): expected_datao=0, payload_size=4096 00:14:39.281 [2024-11-17 18:23:37.437551] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437556] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437565] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.281 [2024-11-17 18:23:37.437572] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.281 [2024-11-17 18:23:37.437576] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437580] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1220) on tqpair=0x668540 00:14:39.281 [2024-11-17 18:23:37.437589] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:39.281 [2024-11-17 18:23:37.437594] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:39.281 [2024-11-17 18:23:37.437599] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:39.281 [2024-11-17 18:23:37.437604] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:39.281 [2024-11-17 18:23:37.437609] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:39.281 [2024-11-17 18:23:37.437614] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:39.281 [2024-11-17 18:23:37.437629] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:39.281 [2024-11-17 18:23:37.437638] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437642] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437646] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x668540) 00:14:39.281 [2024-11-17 18:23:37.437654] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.281 [2024-11-17 18:23:37.437675] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1220, cid 0, qid 0 00:14:39.281 [2024-11-17 18:23:37.437733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.281 [2024-11-17 18:23:37.437740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.281 [2024-11-17 18:23:37.437744] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437749] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1220) on tqpair=0x668540 00:14:39.281 [2024-11-17 18:23:37.437757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.281 [2024-11-17 18:23:37.437761] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.437765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x668540) 00:14:39.282 [2024-11-17 18:23:37.437773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.282 [2024-11-17 18:23:37.437779] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.437783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.437787] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x668540) 00:14:39.282 [2024-11-17 18:23:37.437794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.282 [2024-11-17 18:23:37.437800] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.437804] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.437808] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x668540) 00:14:39.282 [2024-11-17 18:23:37.437815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.282 [2024-11-17 18:23:37.437821] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.437825] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.437829] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.282 [2024-11-17 18:23:37.437835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.282 [2024-11-17 18:23:37.437841] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.437855] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.437863] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.437868] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.437871] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x668540) 00:14:39.282 [2024-11-17 18:23:37.437879] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.282 [2024-11-17 18:23:37.437898] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1220, cid 0, qid 0 00:14:39.282 [2024-11-17 18:23:37.437906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1380, cid 1, qid 0 00:14:39.282 [2024-11-17 18:23:37.437911] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a14e0, cid 2, qid 0 00:14:39.282 [2024-11-17 18:23:37.437916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.282 [2024-11-17 18:23:37.437921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a17a0, cid 4, qid 0 00:14:39.282 [2024-11-17 18:23:37.438026] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.282 [2024-11-17 18:23:37.438044] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.282 [2024-11-17 18:23:37.438050] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438055] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a17a0) on tqpair=0x668540 00:14:39.282 [2024-11-17 18:23:37.438061] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:39.282 [2024-11-17 18:23:37.438067] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.438077] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.438088] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.438097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438101] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x668540) 00:14:39.282 [2024-11-17 18:23:37.438114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.282 [2024-11-17 18:23:37.438134] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a17a0, cid 4, qid 0 00:14:39.282 [2024-11-17 18:23:37.438188] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.282 [2024-11-17 18:23:37.438196] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.282 [2024-11-17 18:23:37.438200] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438204] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a17a0) on tqpair=0x668540 00:14:39.282 [2024-11-17 18:23:37.438269] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.438298] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.438309] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438314] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438318] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x668540) 00:14:39.282 [2024-11-17 18:23:37.438326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.282 [2024-11-17 18:23:37.438348] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a17a0, cid 4, qid 0 00:14:39.282 [2024-11-17 18:23:37.438423] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.282 [2024-11-17 18:23:37.438430] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.282 [2024-11-17 18:23:37.438435] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438439] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x668540): datao=0, datal=4096, cccid=4 00:14:39.282 [2024-11-17 18:23:37.438444] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a17a0) on tqpair(0x668540): expected_datao=0, payload_size=4096 00:14:39.282 [2024-11-17 18:23:37.438453] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438457] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438466] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.282 [2024-11-17 18:23:37.438473] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.282 [2024-11-17 18:23:37.438477] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a17a0) on tqpair=0x668540 00:14:39.282 [2024-11-17 18:23:37.438508] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:39.282 [2024-11-17 18:23:37.438522] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.438534] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.438543] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438547] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438551] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x668540) 00:14:39.282 [2024-11-17 18:23:37.438559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.282 [2024-11-17 18:23:37.438580] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a17a0, cid 4, qid 0 00:14:39.282 [2024-11-17 18:23:37.438654] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.282 [2024-11-17 18:23:37.438661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.282 [2024-11-17 18:23:37.438666] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438670] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x668540): datao=0, datal=4096, cccid=4 00:14:39.282 [2024-11-17 18:23:37.438675] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a17a0) on tqpair(0x668540): expected_datao=0, payload_size=4096 00:14:39.282 [2024-11-17 18:23:37.438683] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438688] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.282 [2024-11-17 18:23:37.438703] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.282 [2024-11-17 18:23:37.438707] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438712] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a17a0) on tqpair=0x668540 00:14:39.282 [2024-11-17 18:23:37.438729] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.438740] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.438749] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438758] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x668540) 00:14:39.282 [2024-11-17 18:23:37.438765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.282 [2024-11-17 18:23:37.438785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a17a0, cid 4, qid 0 00:14:39.282 [2024-11-17 18:23:37.438845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.282 [2024-11-17 18:23:37.438852] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.282 [2024-11-17 18:23:37.438857] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438861] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x668540): datao=0, datal=4096, cccid=4 00:14:39.282 [2024-11-17 18:23:37.438865] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a17a0) on tqpair(0x668540): expected_datao=0, payload_size=4096 00:14:39.282 [2024-11-17 18:23:37.438874] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438878] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.282 [2024-11-17 18:23:37.438898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.282 [2024-11-17 18:23:37.438902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.282 [2024-11-17 18:23:37.438906] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a17a0) on tqpair=0x668540 00:14:39.282 [2024-11-17 18:23:37.438915] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:39.282 [2024-11-17 18:23:37.438924] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:39.283 [2024-11-17 18:23:37.438936] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:39.283 [2024-11-17 18:23:37.438943] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:39.283 [2024-11-17 18:23:37.438949] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:39.283 [2024-11-17 18:23:37.438954] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:39.283 [2024-11-17 18:23:37.438959] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:39.283 [2024-11-17 18:23:37.438965] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:39.283 [2024-11-17 18:23:37.438981] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.438986] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.438990] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x668540) 00:14:39.283 [2024-11-17 18:23:37.438998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.283 [2024-11-17 18:23:37.439005] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439009] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x668540) 00:14:39.283 [2024-11-17 18:23:37.439020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.283 [2024-11-17 18:23:37.439045] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a17a0, cid 4, qid 0 00:14:39.283 [2024-11-17 18:23:37.439053] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1900, cid 5, qid 0 00:14:39.283 [2024-11-17 18:23:37.439140] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.283 [2024-11-17 18:23:37.439148] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.283 [2024-11-17 18:23:37.439152] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439156] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a17a0) on tqpair=0x668540 00:14:39.283 [2024-11-17 18:23:37.439164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.283 [2024-11-17 18:23:37.439170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.283 [2024-11-17 18:23:37.439174] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439178] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1900) on tqpair=0x668540 00:14:39.283 [2024-11-17 18:23:37.439189] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.283 [2024-11-17 
18:23:37.439194] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439199] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x668540) 00:14:39.283 [2024-11-17 18:23:37.439206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.283 [2024-11-17 18:23:37.439224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1900, cid 5, qid 0 00:14:39.283 [2024-11-17 18:23:37.439273] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.283 [2024-11-17 18:23:37.439280] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.283 [2024-11-17 18:23:37.439284] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439288] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1900) on tqpair=0x668540 00:14:39.283 [2024-11-17 18:23:37.439312] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439320] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439324] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x668540) 00:14:39.283 [2024-11-17 18:23:37.439331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.283 [2024-11-17 18:23:37.439351] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1900, cid 5, qid 0 00:14:39.283 [2024-11-17 18:23:37.439409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.283 [2024-11-17 18:23:37.439416] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.283 [2024-11-17 18:23:37.439420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1900) on tqpair=0x668540 00:14:39.283 [2024-11-17 18:23:37.439435] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439440] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439445] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x668540) 00:14:39.283 [2024-11-17 18:23:37.439452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.283 [2024-11-17 18:23:37.439469] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1900, cid 5, qid 0 00:14:39.283 [2024-11-17 18:23:37.439526] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.283 [2024-11-17 18:23:37.439534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.283 [2024-11-17 18:23:37.439538] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439542] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1900) on tqpair=0x668540 00:14:39.283 [2024-11-17 18:23:37.439556] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439561] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439565] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x668540) 00:14:39.283 [2024-11-17 18:23:37.439573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.283 [2024-11-17 18:23:37.439581] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439585] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439589] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x668540) 00:14:39.283 [2024-11-17 18:23:37.439596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.283 [2024-11-17 18:23:37.439604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439608] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439612] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x668540) 00:14:39.283 [2024-11-17 18:23:37.439618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.283 [2024-11-17 18:23:37.439627] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439631] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439635] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x668540) 00:14:39.283 [2024-11-17 18:23:37.439642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.283 [2024-11-17 18:23:37.439661] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1900, cid 5, qid 0 00:14:39.283 [2024-11-17 18:23:37.439668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a17a0, cid 4, qid 0 00:14:39.283 [2024-11-17 18:23:37.439673] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1a60, cid 6, qid 0 00:14:39.283 [2024-11-17 18:23:37.439678] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1bc0, cid 7, qid 0 00:14:39.283 [2024-11-17 18:23:37.439818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.283 [2024-11-17 18:23:37.439826] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.283 [2024-11-17 18:23:37.439830] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439834] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x668540): datao=0, datal=8192, cccid=5 00:14:39.283 [2024-11-17 18:23:37.439838] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1900) on tqpair(0x668540): expected_datao=0, payload_size=8192 00:14:39.283 [2024-11-17 18:23:37.439858] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439863] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439869] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.283 [2024-11-17 18:23:37.439876] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.283 [2024-11-17 18:23:37.439880] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439884] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x668540): datao=0, datal=512, cccid=4 00:14:39.283 [2024-11-17 18:23:37.439888] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a17a0) on tqpair(0x668540): expected_datao=0, payload_size=512 00:14:39.283 [2024-11-17 18:23:37.439896] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439900] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439906] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.283 [2024-11-17 18:23:37.439912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.283 [2024-11-17 18:23:37.439916] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439920] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x668540): datao=0, datal=512, cccid=6 00:14:39.283 [2024-11-17 18:23:37.439925] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1a60) on tqpair(0x668540): expected_datao=0, payload_size=512 00:14:39.283 [2024-11-17 18:23:37.439932] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439936] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439942] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.283 [2024-11-17 18:23:37.439948] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.283 [2024-11-17 18:23:37.439952] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439956] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x668540): datao=0, datal=4096, cccid=7 00:14:39.283 [2024-11-17 18:23:37.439960] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1bc0) on tqpair(0x668540): expected_datao=0, payload_size=4096 00:14:39.283 [2024-11-17 18:23:37.439968] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439972] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.283 [2024-11-17 18:23:37.439980] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.283 [2024-11-17 18:23:37.439986] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.283 [2024-11-17 18:23:37.439990] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.284 [2024-11-17 18:23:37.439994] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1900) on tqpair=0x668540 00:14:39.284 [2024-11-17 18:23:37.440011] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.284 [2024-11-17 18:23:37.440018] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.284 [2024-11-17 18:23:37.440022] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.284 [2024-11-17 18:23:37.440026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a17a0) on tqpair=0x668540 00:14:39.284 [2024-11-17 18:23:37.440036] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.284 [2024-11-17 18:23:37.440043] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:14:39.284 [2024-11-17 18:23:37.440047] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.284 [2024-11-17 18:23:37.440051] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1a60) on tqpair=0x668540 00:14:39.284 [2024-11-17 18:23:37.440059] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.284 [2024-11-17 18:23:37.440065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.284 ===================================================== 00:14:39.284 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:39.284 ===================================================== 00:14:39.284 Controller Capabilities/Features 00:14:39.284 ================================ 00:14:39.284 Vendor ID: 8086 00:14:39.284 Subsystem Vendor ID: 8086 00:14:39.284 Serial Number: SPDK00000000000001 00:14:39.284 Model Number: SPDK bdev Controller 00:14:39.284 Firmware Version: 24.01.1 00:14:39.284 Recommended Arb Burst: 6 00:14:39.284 IEEE OUI Identifier: e4 d2 5c 00:14:39.284 Multi-path I/O 00:14:39.284 May have multiple subsystem ports: Yes 00:14:39.284 May have multiple controllers: Yes 00:14:39.284 Associated with SR-IOV VF: No 00:14:39.284 Max Data Transfer Size: 131072 00:14:39.284 Max Number of Namespaces: 32 00:14:39.284 Max Number of I/O Queues: 127 00:14:39.284 NVMe Specification Version (VS): 1.3 00:14:39.284 NVMe Specification Version (Identify): 1.3 00:14:39.284 Maximum Queue Entries: 128 00:14:39.284 Contiguous Queues Required: Yes 00:14:39.284 Arbitration Mechanisms Supported 00:14:39.284 Weighted Round Robin: Not Supported 00:14:39.284 Vendor Specific: Not Supported 00:14:39.284 Reset Timeout: 15000 ms 00:14:39.284 Doorbell Stride: 4 bytes 00:14:39.284 NVM Subsystem Reset: Not Supported 00:14:39.284 Command Sets Supported 00:14:39.284 NVM Command Set: Supported 00:14:39.284 Boot Partition: Not Supported 00:14:39.284 Memory Page Size Minimum: 4096 bytes 00:14:39.284 Memory Page Size Maximum: 4096 bytes 00:14:39.284 Persistent Memory Region: Not Supported 00:14:39.284 Optional Asynchronous Events Supported 00:14:39.284 Namespace Attribute Notices: Supported 00:14:39.284 Firmware Activation Notices: Not Supported 00:14:39.284 ANA Change Notices: Not Supported 00:14:39.284 PLE Aggregate Log Change Notices: Not Supported 00:14:39.284 LBA Status Info Alert Notices: Not Supported 00:14:39.284 EGE Aggregate Log Change Notices: Not Supported 00:14:39.284 Normal NVM Subsystem Shutdown event: Not Supported 00:14:39.284 Zone Descriptor Change Notices: Not Supported 00:14:39.284 Discovery Log Change Notices: Not Supported 00:14:39.284 Controller Attributes 00:14:39.284 128-bit Host Identifier: Supported 00:14:39.284 Non-Operational Permissive Mode: Not Supported 00:14:39.284 NVM Sets: Not Supported 00:14:39.284 Read Recovery Levels: Not Supported 00:14:39.284 Endurance Groups: Not Supported 00:14:39.284 Predictable Latency Mode: Not Supported 00:14:39.284 Traffic Based Keep ALive: Not Supported 00:14:39.284 Namespace Granularity: Not Supported 00:14:39.284 SQ Associations: Not Supported 00:14:39.284 UUID List: Not Supported 00:14:39.284 Multi-Domain Subsystem: Not Supported 00:14:39.284 Fixed Capacity Management: Not Supported 00:14:39.284 Variable Capacity Management: Not Supported 00:14:39.284 Delete Endurance Group: Not Supported 00:14:39.284 Delete NVM Set: Not Supported 00:14:39.284 Extended LBA Formats Supported: Not Supported 00:14:39.284 Flexible Data Placement Supported: Not 
Supported 00:14:39.284 00:14:39.284 Controller Memory Buffer Support 00:14:39.284 ================================ 00:14:39.284 Supported: No 00:14:39.284 00:14:39.284 Persistent Memory Region Support 00:14:39.284 ================================ 00:14:39.284 Supported: No 00:14:39.284 00:14:39.284 Admin Command Set Attributes 00:14:39.284 ============================ 00:14:39.284 Security Send/Receive: Not Supported 00:14:39.284 Format NVM: Not Supported 00:14:39.284 Firmware Activate/Download: Not Supported 00:14:39.284 Namespace Management: Not Supported 00:14:39.284 Device Self-Test: Not Supported 00:14:39.284 Directives: Not Supported 00:14:39.284 NVMe-MI: Not Supported 00:14:39.284 Virtualization Management: Not Supported 00:14:39.284 Doorbell Buffer Config: Not Supported 00:14:39.284 Get LBA Status Capability: Not Supported 00:14:39.284 Command & Feature Lockdown Capability: Not Supported 00:14:39.284 Abort Command Limit: 4 00:14:39.284 Async Event Request Limit: 4 00:14:39.284 Number of Firmware Slots: N/A 00:14:39.284 Firmware Slot 1 Read-Only: N/A 00:14:39.284 Firmware Activation Without Reset: N/A 00:14:39.284 Multiple Update Detection Support: N/A 00:14:39.284 Firmware Update Granularity: No Information Provided 00:14:39.284 Per-Namespace SMART Log: No 00:14:39.284 Asymmetric Namespace Access Log Page: Not Supported 00:14:39.284 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:39.284 Command Effects Log Page: Supported 00:14:39.284 Get Log Page Extended Data: Supported 00:14:39.284 Telemetry Log Pages: Not Supported 00:14:39.284 Persistent Event Log Pages: Not Supported 00:14:39.284 Supported Log Pages Log Page: May Support 00:14:39.284 Commands Supported & Effects Log Page: Not Supported 00:14:39.284 Feature Identifiers & Effects Log Page:May Support 00:14:39.284 NVMe-MI Commands & Effects Log Page: May Support 00:14:39.284 Data Area 4 for Telemetry Log: Not Supported 00:14:39.284 Error Log Page Entries Supported: 128 00:14:39.284 Keep Alive: Supported 00:14:39.284 Keep Alive Granularity: 10000 ms 00:14:39.284 00:14:39.284 NVM Command Set Attributes 00:14:39.284 ========================== 00:14:39.284 Submission Queue Entry Size 00:14:39.284 Max: 64 00:14:39.284 Min: 64 00:14:39.284 Completion Queue Entry Size 00:14:39.284 Max: 16 00:14:39.284 Min: 16 00:14:39.284 Number of Namespaces: 32 00:14:39.284 Compare Command: Supported 00:14:39.284 Write Uncorrectable Command: Not Supported 00:14:39.284 Dataset Management Command: Supported 00:14:39.284 Write Zeroes Command: Supported 00:14:39.284 Set Features Save Field: Not Supported 00:14:39.284 Reservations: Supported 00:14:39.284 Timestamp: Not Supported 00:14:39.284 Copy: Supported 00:14:39.284 Volatile Write Cache: Present 00:14:39.284 Atomic Write Unit (Normal): 1 00:14:39.284 Atomic Write Unit (PFail): 1 00:14:39.284 Atomic Compare & Write Unit: 1 00:14:39.284 Fused Compare & Write: Supported 00:14:39.284 Scatter-Gather List 00:14:39.284 SGL Command Set: Supported 00:14:39.284 SGL Keyed: Supported 00:14:39.284 SGL Bit Bucket Descriptor: Not Supported 00:14:39.284 SGL Metadata Pointer: Not Supported 00:14:39.284 Oversized SGL: Not Supported 00:14:39.284 SGL Metadata Address: Not Supported 00:14:39.284 SGL Offset: Supported 00:14:39.284 Transport SGL Data Block: Not Supported 00:14:39.284 Replay Protected Memory Block: Not Supported 00:14:39.284 00:14:39.284 Firmware Slot Information 00:14:39.284 ========================= 00:14:39.284 Active slot: 1 00:14:39.284 Slot 1 Firmware Revision: 24.01.1 00:14:39.284 00:14:39.284 
00:14:39.284 Commands Supported and Effects 00:14:39.284 ============================== 00:14:39.284 Admin Commands 00:14:39.284 -------------- 00:14:39.284 Get Log Page (02h): Supported 00:14:39.284 Identify (06h): Supported 00:14:39.284 Abort (08h): Supported 00:14:39.284 Set Features (09h): Supported 00:14:39.284 Get Features (0Ah): Supported 00:14:39.284 Asynchronous Event Request (0Ch): Supported 00:14:39.284 Keep Alive (18h): Supported 00:14:39.284 I/O Commands 00:14:39.284 ------------ 00:14:39.284 Flush (00h): Supported LBA-Change 00:14:39.284 Write (01h): Supported LBA-Change 00:14:39.284 Read (02h): Supported 00:14:39.284 Compare (05h): Supported 00:14:39.284 Write Zeroes (08h): Supported LBA-Change 00:14:39.284 Dataset Management (09h): Supported LBA-Change 00:14:39.284 Copy (19h): Supported LBA-Change 00:14:39.284 Unknown (79h): Supported LBA-Change 00:14:39.284 Unknown (7Ah): Supported 00:14:39.284 00:14:39.284 Error Log 00:14:39.284 ========= 00:14:39.284 00:14:39.284 Arbitration 00:14:39.284 =========== 00:14:39.284 Arbitration Burst: 1 00:14:39.284 00:14:39.284 Power Management 00:14:39.284 ================ 00:14:39.284 Number of Power States: 1 00:14:39.284 Current Power State: Power State #0 00:14:39.284 Power State #0: 00:14:39.285 Max Power: 0.00 W 00:14:39.285 Non-Operational State: Operational 00:14:39.285 Entry Latency: Not Reported 00:14:39.285 Exit Latency: Not Reported 00:14:39.285 Relative Read Throughput: 0 00:14:39.285 Relative Read Latency: 0 00:14:39.285 Relative Write Throughput: 0 00:14:39.285 Relative Write Latency: 0 00:14:39.285 Idle Power: Not Reported 00:14:39.285 Active Power: Not Reported 00:14:39.285 Non-Operational Permissive Mode: Not Supported 00:14:39.285 00:14:39.285 Health Information 00:14:39.285 ================== 00:14:39.285 Critical Warnings: 00:14:39.285 Available Spare Space: OK 00:14:39.285 Temperature: OK 00:14:39.285 Device Reliability: OK 00:14:39.285 Read Only: No 00:14:39.285 Volatile Memory Backup: OK 00:14:39.285 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:39.285 Temperature Threshold: [2024-11-17 18:23:37.440069] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.440073] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1bc0) on tqpair=0x668540 00:14:39.285 [2024-11-17 18:23:37.440187] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.440194] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.440198] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x668540) 00:14:39.285 [2024-11-17 18:23:37.440206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.285 [2024-11-17 18:23:37.440229] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1bc0, cid 7, qid 0 00:14:39.285 [2024-11-17 18:23:37.440278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.285 [2024-11-17 18:23:37.444327] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.285 [2024-11-17 18:23:37.444337] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444342] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1bc0) on tqpair=0x668540 00:14:39.285 [2024-11-17 18:23:37.444383] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:39.285 [2024-11-17 18:23:37.444400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.285 [2024-11-17 18:23:37.444409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.285 [2024-11-17 18:23:37.444415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.285 [2024-11-17 18:23:37.444422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.285 [2024-11-17 18:23:37.444432] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444437] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444441] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.285 [2024-11-17 18:23:37.444450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.285 [2024-11-17 18:23:37.444478] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.285 [2024-11-17 18:23:37.444533] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.285 [2024-11-17 18:23:37.444540] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.285 [2024-11-17 18:23:37.444545] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444549] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.285 [2024-11-17 18:23:37.444557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444562] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444566] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.285 [2024-11-17 18:23:37.444574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.285 [2024-11-17 18:23:37.444596] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.285 [2024-11-17 18:23:37.444675] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.285 [2024-11-17 18:23:37.444682] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.285 [2024-11-17 18:23:37.444686] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.285 [2024-11-17 18:23:37.444711] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:39.285 [2024-11-17 18:23:37.444716] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:39.285 [2024-11-17 18:23:37.444726] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444731] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444735] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.285 [2024-11-17 18:23:37.444743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.285 [2024-11-17 18:23:37.444760] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.285 [2024-11-17 18:23:37.444805] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.285 [2024-11-17 18:23:37.444812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.285 [2024-11-17 18:23:37.444816] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444820] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.285 [2024-11-17 18:23:37.444832] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444836] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444840] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.285 [2024-11-17 18:23:37.444848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.285 [2024-11-17 18:23:37.444865] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.285 [2024-11-17 18:23:37.444912] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.285 [2024-11-17 18:23:37.444919] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.285 [2024-11-17 18:23:37.444923] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444927] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.285 [2024-11-17 18:23:37.444938] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444943] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.444947] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.285 [2024-11-17 18:23:37.444954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.285 [2024-11-17 18:23:37.444971] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.285 [2024-11-17 18:23:37.445021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.285 [2024-11-17 18:23:37.445028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.285 [2024-11-17 18:23:37.445032] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.445036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.285 [2024-11-17 18:23:37.445047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.445051] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.445055] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.285 [2024-11-17 18:23:37.445063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.285 [2024-11-17 18:23:37.445079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.285 [2024-11-17 18:23:37.445125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.285 [2024-11-17 18:23:37.445132] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.285 [2024-11-17 18:23:37.445136] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.445140] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.285 [2024-11-17 18:23:37.445151] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.445156] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.285 [2024-11-17 18:23:37.445159] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.285 [2024-11-17 18:23:37.445167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.445183] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 [2024-11-17 18:23:37.445228] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.445235] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.286 [2024-11-17 18:23:37.445239] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.445254] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445259] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445262] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.286 [2024-11-17 18:23:37.445270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.445299] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 [2024-11-17 18:23:37.445349] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.445356] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.286 [2024-11-17 18:23:37.445360] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445364] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.445392] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445397] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445401] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.286 [2024-11-17 18:23:37.445409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.445428] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 
[2024-11-17 18:23:37.445476] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.445483] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.286 [2024-11-17 18:23:37.445487] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445491] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.445502] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445507] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445511] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.286 [2024-11-17 18:23:37.445519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.445536] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 [2024-11-17 18:23:37.445586] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.445593] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.286 [2024-11-17 18:23:37.445597] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445602] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.445612] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445617] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445621] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.286 [2024-11-17 18:23:37.445629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.445646] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 [2024-11-17 18:23:37.445693] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.445722] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.286 [2024-11-17 18:23:37.445727] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445731] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.445742] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445747] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445751] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.286 [2024-11-17 18:23:37.445759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.445777] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 [2024-11-17 18:23:37.445827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.445834] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:14:39.286 [2024-11-17 18:23:37.445838] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.445853] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445858] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445862] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.286 [2024-11-17 18:23:37.445869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.445886] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 [2024-11-17 18:23:37.445934] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.445940] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.286 [2024-11-17 18:23:37.445944] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445948] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.445959] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445964] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.445968] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.286 [2024-11-17 18:23:37.445975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.445992] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 [2024-11-17 18:23:37.446043] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.446055] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.286 [2024-11-17 18:23:37.446060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446064] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.446075] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446080] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446084] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.286 [2024-11-17 18:23:37.446091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.446109] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 [2024-11-17 18:23:37.446160] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.446168] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.286 [2024-11-17 18:23:37.446172] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446176] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.446187] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446192] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446196] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.286 [2024-11-17 18:23:37.446203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.446220] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 [2024-11-17 18:23:37.446265] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.446284] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.286 [2024-11-17 18:23:37.446290] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446295] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.446323] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446329] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446333] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.286 [2024-11-17 18:23:37.446341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.446361] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 [2024-11-17 18:23:37.446408] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.446415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.286 [2024-11-17 18:23:37.446419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446423] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.446434] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446439] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446443] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.286 [2024-11-17 18:23:37.446451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.286 [2024-11-17 18:23:37.446469] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.286 [2024-11-17 18:23:37.446529] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.286 [2024-11-17 18:23:37.446542] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.286 [2024-11-17 18:23:37.446547] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446552] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.286 [2024-11-17 18:23:37.446563] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446568] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.286 [2024-11-17 18:23:37.446573] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 [2024-11-17 18:23:37.446581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.446600] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.446653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 [2024-11-17 18:23:37.446661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.446665] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.446670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.446681] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.446686] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.446690] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 [2024-11-17 18:23:37.446697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.446715] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.446761] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 [2024-11-17 18:23:37.446768] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.446772] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.446776] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.446787] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.446792] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.446796] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 [2024-11-17 18:23:37.446804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.446822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.446884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 [2024-11-17 18:23:37.446891] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.446895] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.446899] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.446910] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.446915] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.446919] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 
[2024-11-17 18:23:37.446926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.446943] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.446991] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 [2024-11-17 18:23:37.446998] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.447002] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447006] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.447016] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447021] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447025] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 [2024-11-17 18:23:37.447032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.447049] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.447094] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 [2024-11-17 18:23:37.447102] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.447106] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447110] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.447120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447129] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 [2024-11-17 18:23:37.447137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.447154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.447201] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 [2024-11-17 18:23:37.447208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.447213] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447217] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.447227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447232] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447236] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 [2024-11-17 18:23:37.447243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.447260] 
nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.447341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 [2024-11-17 18:23:37.447350] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.447354] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447359] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.447370] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447375] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447379] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 [2024-11-17 18:23:37.447387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.447406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.447450] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 [2024-11-17 18:23:37.447457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.447461] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447465] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.447476] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447485] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 [2024-11-17 18:23:37.447493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.447511] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.447560] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 [2024-11-17 18:23:37.447577] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.447583] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447587] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.447599] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447604] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447608] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 [2024-11-17 18:23:37.447616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.447635] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.447685] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 
[2024-11-17 18:23:37.447697] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.447702] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447706] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.447717] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447722] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447727] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 [2024-11-17 18:23:37.447749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.447767] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.447812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 [2024-11-17 18:23:37.447824] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.447828] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447833] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.447844] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447849] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447853] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.287 [2024-11-17 18:23:37.447860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.287 [2024-11-17 18:23:37.447877] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.287 [2024-11-17 18:23:37.447922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.287 [2024-11-17 18:23:37.447929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.287 [2024-11-17 18:23:37.447933] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.287 [2024-11-17 18:23:37.447938] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.287 [2024-11-17 18:23:37.447948] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.288 [2024-11-17 18:23:37.447953] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.288 [2024-11-17 18:23:37.447957] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.288 [2024-11-17 18:23:37.447964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.288 [2024-11-17 18:23:37.447981] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.288 [2024-11-17 18:23:37.448032] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.288 [2024-11-17 18:23:37.448038] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.288 [2024-11-17 18:23:37.448042] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:39.288 [2024-11-17 18:23:37.448047] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.288 [2024-11-17 18:23:37.448057] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.288 [2024-11-17 18:23:37.448062] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.288 [2024-11-17 18:23:37.448066] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.288 [2024-11-17 18:23:37.448073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.288 [2024-11-17 18:23:37.448090] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.288 [2024-11-17 18:23:37.448143] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.288 [2024-11-17 18:23:37.448150] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.288 [2024-11-17 18:23:37.448154] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.288 [2024-11-17 18:23:37.448158] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.288 [2024-11-17 18:23:37.448169] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.288 [2024-11-17 18:23:37.448174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.288 [2024-11-17 18:23:37.448177] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.288 [2024-11-17 18:23:37.448185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.288 [2024-11-17 18:23:37.448201] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.288 [2024-11-17 18:23:37.448249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.288 [2024-11-17 18:23:37.448256] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.288 [2024-11-17 18:23:37.448260] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.288 [2024-11-17 18:23:37.448264] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.288 [2024-11-17 18:23:37.452287] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.288 [2024-11-17 18:23:37.452307] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.288 [2024-11-17 18:23:37.452312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x668540) 00:14:39.288 [2024-11-17 18:23:37.452337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.288 [2024-11-17 18:23:37.452364] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1640, cid 3, qid 0 00:14:39.288 [2024-11-17 18:23:37.452417] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.288 [2024-11-17 18:23:37.452425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.288 [2024-11-17 18:23:37.452429] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.288 [2024-11-17 18:23:37.452433] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a1640) on tqpair=0x668540 00:14:39.288 [2024-11-17 18:23:37.452442] 
nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:14:39.288 0 Kelvin (-273 Celsius) 00:14:39.288 Available Spare: 0% 00:14:39.288 Available Spare Threshold: 0% 00:14:39.288 Life Percentage Used: 0% 00:14:39.288 Data Units Read: 0 00:14:39.288 Data Units Written: 0 00:14:39.288 Host Read Commands: 0 00:14:39.288 Host Write Commands: 0 00:14:39.288 Controller Busy Time: 0 minutes 00:14:39.288 Power Cycles: 0 00:14:39.288 Power On Hours: 0 hours 00:14:39.288 Unsafe Shutdowns: 0 00:14:39.288 Unrecoverable Media Errors: 0 00:14:39.288 Lifetime Error Log Entries: 0 00:14:39.288 Warning Temperature Time: 0 minutes 00:14:39.288 Critical Temperature Time: 0 minutes 00:14:39.288 00:14:39.288 Number of Queues 00:14:39.288 ================ 00:14:39.288 Number of I/O Submission Queues: 127 00:14:39.288 Number of I/O Completion Queues: 127 00:14:39.288 00:14:39.288 Active Namespaces 00:14:39.288 ================= 00:14:39.288 Namespace ID:1 00:14:39.288 Error Recovery Timeout: Unlimited 00:14:39.288 Command Set Identifier: NVM (00h) 00:14:39.288 Deallocate: Supported 00:14:39.288 Deallocated/Unwritten Error: Not Supported 00:14:39.288 Deallocated Read Value: Unknown 00:14:39.288 Deallocate in Write Zeroes: Not Supported 00:14:39.288 Deallocated Guard Field: 0xFFFF 00:14:39.288 Flush: Supported 00:14:39.288 Reservation: Supported 00:14:39.288 Namespace Sharing Capabilities: Multiple Controllers 00:14:39.288 Size (in LBAs): 131072 (0GiB) 00:14:39.288 Capacity (in LBAs): 131072 (0GiB) 00:14:39.288 Utilization (in LBAs): 131072 (0GiB) 00:14:39.288 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:39.288 EUI64: ABCDEF0123456789 00:14:39.288 UUID: 8ab80fc7-8de9-4293-86c9-0097e9146ba0 00:14:39.288 Thin Provisioning: Not Supported 00:14:39.288 Per-NS Atomic Units: Yes 00:14:39.288 Atomic Boundary Size (Normal): 0 00:14:39.288 Atomic Boundary Size (PFail): 0 00:14:39.288 Atomic Boundary Offset: 0 00:14:39.288 Maximum Single Source Range Length: 65535 00:14:39.288 Maximum Copy Length: 65535 00:14:39.288 Maximum Source Range Count: 1 00:14:39.288 NGUID/EUI64 Never Reused: No 00:14:39.288 Namespace Write Protected: No 00:14:39.288 Number of LBA Formats: 1 00:14:39.288 Current LBA Format: LBA Format #00 00:14:39.288 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:39.288 00:14:39.288 18:23:37 -- host/identify.sh@51 -- # sync 00:14:39.288 18:23:37 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.288 18:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.288 18:23:37 -- common/autotest_common.sh@10 -- # set +x 00:14:39.288 18:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.288 18:23:37 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:39.288 18:23:37 -- host/identify.sh@56 -- # nvmftestfini 00:14:39.288 18:23:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:39.288 18:23:37 -- nvmf/common.sh@116 -- # sync 00:14:39.548 18:23:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:39.548 18:23:37 -- nvmf/common.sh@119 -- # set +e 00:14:39.548 18:23:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:39.548 18:23:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:39.548 rmmod nvme_tcp 00:14:39.548 rmmod nvme_fabrics 00:14:39.548 rmmod nvme_keyring 00:14:39.548 18:23:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:39.548 18:23:37 -- nvmf/common.sh@123 -- # set -e 00:14:39.548 18:23:37 -- nvmf/common.sh@124 -- # return 0 00:14:39.548 
18:23:37 -- nvmf/common.sh@477 -- # '[' -n 79961 ']' 00:14:39.548 18:23:37 -- nvmf/common.sh@478 -- # killprocess 79961 00:14:39.548 18:23:37 -- common/autotest_common.sh@936 -- # '[' -z 79961 ']' 00:14:39.548 18:23:37 -- common/autotest_common.sh@940 -- # kill -0 79961 00:14:39.548 18:23:37 -- common/autotest_common.sh@941 -- # uname 00:14:39.548 18:23:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:39.548 18:23:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79961 00:14:39.548 killing process with pid 79961 00:14:39.548 18:23:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:39.548 18:23:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:39.548 18:23:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79961' 00:14:39.548 18:23:37 -- common/autotest_common.sh@955 -- # kill 79961 00:14:39.548 [2024-11-17 18:23:37.628044] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:39.548 18:23:37 -- common/autotest_common.sh@960 -- # wait 79961 00:14:39.548 18:23:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:39.548 18:23:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:39.548 18:23:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:39.548 18:23:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.807 18:23:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:39.807 18:23:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.807 18:23:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.807 18:23:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.807 18:23:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:39.807 00:14:39.807 real 0m2.604s 00:14:39.807 user 0m7.241s 00:14:39.807 sys 0m0.591s 00:14:39.807 18:23:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:39.807 18:23:37 -- common/autotest_common.sh@10 -- # set +x 00:14:39.807 ************************************ 00:14:39.807 END TEST nvmf_identify 00:14:39.807 ************************************ 00:14:39.807 18:23:37 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:39.807 18:23:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:39.807 18:23:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:39.807 18:23:37 -- common/autotest_common.sh@10 -- # set +x 00:14:39.807 ************************************ 00:14:39.807 START TEST nvmf_perf 00:14:39.807 ************************************ 00:14:39.807 18:23:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:39.807 * Looking for test storage... 
00:14:39.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:39.807 18:23:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:39.807 18:23:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:39.807 18:23:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:39.807 18:23:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:39.807 18:23:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:39.807 18:23:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:39.807 18:23:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:39.807 18:23:38 -- scripts/common.sh@335 -- # IFS=.-: 00:14:39.807 18:23:38 -- scripts/common.sh@335 -- # read -ra ver1 00:14:39.807 18:23:38 -- scripts/common.sh@336 -- # IFS=.-: 00:14:39.807 18:23:38 -- scripts/common.sh@336 -- # read -ra ver2 00:14:39.807 18:23:38 -- scripts/common.sh@337 -- # local 'op=<' 00:14:39.807 18:23:38 -- scripts/common.sh@339 -- # ver1_l=2 00:14:39.807 18:23:38 -- scripts/common.sh@340 -- # ver2_l=1 00:14:39.807 18:23:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:39.807 18:23:38 -- scripts/common.sh@343 -- # case "$op" in 00:14:39.807 18:23:38 -- scripts/common.sh@344 -- # : 1 00:14:39.807 18:23:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:39.807 18:23:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:39.807 18:23:38 -- scripts/common.sh@364 -- # decimal 1 00:14:39.807 18:23:38 -- scripts/common.sh@352 -- # local d=1 00:14:39.807 18:23:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:39.807 18:23:38 -- scripts/common.sh@354 -- # echo 1 00:14:39.807 18:23:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:39.807 18:23:38 -- scripts/common.sh@365 -- # decimal 2 00:14:39.807 18:23:38 -- scripts/common.sh@352 -- # local d=2 00:14:39.807 18:23:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:39.807 18:23:38 -- scripts/common.sh@354 -- # echo 2 00:14:39.807 18:23:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:39.807 18:23:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:39.807 18:23:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:39.807 18:23:38 -- scripts/common.sh@367 -- # return 0 00:14:39.807 18:23:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:39.807 18:23:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:39.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.807 --rc genhtml_branch_coverage=1 00:14:39.807 --rc genhtml_function_coverage=1 00:14:39.807 --rc genhtml_legend=1 00:14:39.807 --rc geninfo_all_blocks=1 00:14:39.807 --rc geninfo_unexecuted_blocks=1 00:14:39.807 00:14:39.807 ' 00:14:39.807 18:23:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:39.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.807 --rc genhtml_branch_coverage=1 00:14:39.807 --rc genhtml_function_coverage=1 00:14:39.807 --rc genhtml_legend=1 00:14:39.807 --rc geninfo_all_blocks=1 00:14:39.807 --rc geninfo_unexecuted_blocks=1 00:14:39.807 00:14:39.807 ' 00:14:39.807 18:23:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:39.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.807 --rc genhtml_branch_coverage=1 00:14:39.807 --rc genhtml_function_coverage=1 00:14:39.807 --rc genhtml_legend=1 00:14:39.807 --rc geninfo_all_blocks=1 00:14:39.807 --rc geninfo_unexecuted_blocks=1 00:14:39.807 00:14:39.807 ' 00:14:39.807 
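The xtrace above is scripts/common.sh deciding whether the installed lcov (1.15, taken from lcov --version) is older than 2, so that matching coverage options can be exported. Condensed into a standalone sketch, and assuming purely numeric version fields, the comparison is roughly:

  # sketch of the lt/cmp_versions check traced above; not the script itself
  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v n d1 d2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          if (( d1 > d2 )); then [[ $op == '>' || $op == '>=' ]]; return; fi
          if (( d1 < d2 )); then [[ $op == '<' || $op == '<=' ]]; return; fi
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]
  }
  lt 1.15 2 && echo "installed lcov is pre-2.x"   # succeeds in this run, so the 1.x-style --rc options are kept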
18:23:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:39.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.807 --rc genhtml_branch_coverage=1 00:14:39.807 --rc genhtml_function_coverage=1 00:14:39.807 --rc genhtml_legend=1 00:14:39.807 --rc geninfo_all_blocks=1 00:14:39.807 --rc geninfo_unexecuted_blocks=1 00:14:39.807 00:14:39.807 ' 00:14:39.807 18:23:38 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:39.807 18:23:38 -- nvmf/common.sh@7 -- # uname -s 00:14:39.807 18:23:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.807 18:23:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.807 18:23:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.807 18:23:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.807 18:23:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.807 18:23:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.807 18:23:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.807 18:23:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.807 18:23:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.066 18:23:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.067 18:23:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:14:40.067 18:23:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:14:40.067 18:23:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.067 18:23:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.067 18:23:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:40.067 18:23:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:40.067 18:23:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.067 18:23:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.067 18:23:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.067 18:23:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.067 18:23:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.067 18:23:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.067 18:23:38 -- paths/export.sh@5 -- # export PATH 00:14:40.067 18:23:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.067 18:23:38 -- nvmf/common.sh@46 -- # : 0 00:14:40.067 18:23:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:40.067 18:23:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:40.067 18:23:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:40.067 18:23:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.067 18:23:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.067 18:23:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:40.067 18:23:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:40.067 18:23:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:40.067 18:23:38 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:40.067 18:23:38 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:40.067 18:23:38 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.067 18:23:38 -- host/perf.sh@17 -- # nvmftestinit 00:14:40.067 18:23:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:40.067 18:23:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.067 18:23:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:40.067 18:23:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:40.067 18:23:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:40.067 18:23:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.067 18:23:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.067 18:23:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.067 18:23:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:40.067 18:23:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:40.067 18:23:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:40.067 18:23:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:40.067 18:23:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:40.067 18:23:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:40.067 18:23:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.067 18:23:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.067 18:23:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:40.067 18:23:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:40.067 18:23:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:40.067 18:23:38 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:40.067 18:23:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:40.067 18:23:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.067 18:23:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:40.067 18:23:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:40.067 18:23:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:40.067 18:23:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:40.067 18:23:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:40.067 18:23:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:40.067 Cannot find device "nvmf_tgt_br" 00:14:40.067 18:23:38 -- nvmf/common.sh@154 -- # true 00:14:40.067 18:23:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:40.067 Cannot find device "nvmf_tgt_br2" 00:14:40.067 18:23:38 -- nvmf/common.sh@155 -- # true 00:14:40.067 18:23:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:40.067 18:23:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:40.067 Cannot find device "nvmf_tgt_br" 00:14:40.067 18:23:38 -- nvmf/common.sh@157 -- # true 00:14:40.067 18:23:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:40.067 Cannot find device "nvmf_tgt_br2" 00:14:40.067 18:23:38 -- nvmf/common.sh@158 -- # true 00:14:40.067 18:23:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:40.067 18:23:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:40.067 18:23:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:40.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.067 18:23:38 -- nvmf/common.sh@161 -- # true 00:14:40.067 18:23:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:40.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.067 18:23:38 -- nvmf/common.sh@162 -- # true 00:14:40.067 18:23:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:40.067 18:23:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:40.067 18:23:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:40.067 18:23:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:40.067 18:23:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:40.067 18:23:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:40.067 18:23:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:40.067 18:23:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:40.067 18:23:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:40.067 18:23:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:40.067 18:23:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:40.067 18:23:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:40.067 18:23:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:40.067 18:23:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:40.327 18:23:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:40.327 18:23:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:40.327 18:23:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:40.327 18:23:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:40.327 18:23:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:40.327 18:23:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:40.327 18:23:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:40.327 18:23:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:40.327 18:23:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:40.327 18:23:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:40.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:14:40.327 00:14:40.327 --- 10.0.0.2 ping statistics --- 00:14:40.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.327 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:40.327 18:23:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:40.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:40.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:14:40.327 00:14:40.327 --- 10.0.0.3 ping statistics --- 00:14:40.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.327 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:14:40.327 18:23:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:40.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:40.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:14:40.327 00:14:40.327 --- 10.0.0.1 ping statistics --- 00:14:40.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.327 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:40.327 18:23:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.327 18:23:38 -- nvmf/common.sh@421 -- # return 0 00:14:40.327 18:23:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:40.327 18:23:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.327 18:23:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:40.327 18:23:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:40.327 18:23:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.327 18:23:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:40.327 18:23:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:40.327 18:23:38 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:40.327 18:23:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:40.327 18:23:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.327 18:23:38 -- common/autotest_common.sh@10 -- # set +x 00:14:40.327 18:23:38 -- nvmf/common.sh@469 -- # nvmfpid=80181 00:14:40.327 18:23:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:40.327 18:23:38 -- nvmf/common.sh@470 -- # waitforlisten 80181 00:14:40.327 18:23:38 -- common/autotest_common.sh@829 -- # '[' -z 80181 ']' 00:14:40.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
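The nvmf_veth_init sequence traced above builds a small veth/bridge topology: one initiator interface (nvmf_init_if, 10.0.0.1) in the root namespace and two target interfaces (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, with iptables opened for TCP port 4420. A condensed sketch of the same setup follows; the for-loops merely collapse the per-link commands from the trace, and $ns is shorthand introduced only for the sketch.

  ns=nvmf_tgt_ns_spdk
  ip netns add "$ns"
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side, first port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target side, second port
  ip link set nvmf_tgt_if  netns "$ns"
  ip link set nvmf_tgt_if2 netns "$ns"
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec "$ns" ip link set nvmf_tgt_if up
  ip netns exec "$ns" ip link set nvmf_tgt_if2 up
  ip netns exec "$ns" ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability, as in the trace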
00:14:40.327 18:23:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.327 18:23:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.327 18:23:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.327 18:23:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.327 18:23:38 -- common/autotest_common.sh@10 -- # set +x 00:14:40.327 [2024-11-17 18:23:38.493526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:40.327 [2024-11-17 18:23:38.493611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.586 [2024-11-17 18:23:38.632161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.586 [2024-11-17 18:23:38.664649] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:40.586 [2024-11-17 18:23:38.665040] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.586 [2024-11-17 18:23:38.665093] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.586 [2024-11-17 18:23:38.665220] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.586 [2024-11-17 18:23:38.665446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.586 [2024-11-17 18:23:38.665590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.586 [2024-11-17 18:23:38.665736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.586 [2024-11-17 18:23:38.665930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.522 18:23:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.522 18:23:39 -- common/autotest_common.sh@862 -- # return 0 00:14:41.522 18:23:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:41.522 18:23:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.522 18:23:39 -- common/autotest_common.sh@10 -- # set +x 00:14:41.522 18:23:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.522 18:23:39 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:41.522 18:23:39 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:41.782 18:23:39 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:41.782 18:23:39 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:42.040 18:23:40 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:14:42.041 18:23:40 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:42.300 18:23:40 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:42.300 18:23:40 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:14:42.300 18:23:40 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:42.300 18:23:40 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:42.300 18:23:40 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:42.558 [2024-11-17 18:23:40.697998] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.558 
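With the TCP transport initialized, the next few trace entries create the subsystem, attach both bdevs as namespaces, and add the 10.0.0.2:4420 listener. Condensed, the target-side rpc.py sequence issued by perf.sh is roughly the sketch below; $rpc is shorthand introduced here, and every argument is taken from the surrounding trace rather than from the script source.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                   # 64 MiB bdev, 512 B blocks -> Malloc0
  $rpc nvmf_create_transport -t tcp -o                             # TCP transport, as traced above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # becomes NSID 1 (512 B sectors)
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe at 0000:00:06.0, NSID 2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420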
18:23:40 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:42.817 18:23:40 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:42.817 18:23:40 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:43.075 18:23:41 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:43.075 18:23:41 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:43.334 18:23:41 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.594 [2024-11-17 18:23:41.690042] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.594 18:23:41 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.853 18:23:41 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:14:43.853 18:23:41 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:43.853 18:23:41 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:43.853 18:23:41 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:44.789 Initializing NVMe Controllers 00:14:44.789 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:14:44.789 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:14:44.789 Initialization complete. Launching workers. 00:14:44.789 ======================================================== 00:14:44.789 Latency(us) 00:14:44.789 Device Information : IOPS MiB/s Average min max 00:14:44.789 PCIE (0000:00:06.0) NSID 1 from core 0: 22719.98 88.75 1412.69 325.14 7780.42 00:14:44.789 ======================================================== 00:14:44.789 Total : 22719.98 88.75 1412.69 325.14 7780.42 00:14:44.789 00:14:44.789 18:23:43 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:46.166 Initializing NVMe Controllers 00:14:46.166 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:46.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:46.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:46.166 Initialization complete. Launching workers. 
00:14:46.166 ======================================================== 00:14:46.166 Latency(us) 00:14:46.166 Device Information : IOPS MiB/s Average min max 00:14:46.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3555.99 13.89 279.75 99.15 7224.89 00:14:46.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8112.38 5950.90 15016.62 00:14:46.166 ======================================================== 00:14:46.166 Total : 3679.99 14.37 543.68 99.15 15016.62 00:14:46.166 00:14:46.166 18:23:44 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:47.543 Initializing NVMe Controllers 00:14:47.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:47.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:47.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:47.543 Initialization complete. Launching workers. 00:14:47.543 ======================================================== 00:14:47.543 Latency(us) 00:14:47.543 Device Information : IOPS MiB/s Average min max 00:14:47.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8804.68 34.39 3635.68 450.08 7594.21 00:14:47.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4003.03 15.64 8029.09 6309.08 16316.51 00:14:47.543 ======================================================== 00:14:47.543 Total : 12807.71 50.03 5008.83 450.08 16316.51 00:14:47.543 00:14:47.543 18:23:45 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:47.543 18:23:45 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:50.081 Initializing NVMe Controllers 00:14:50.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:50.081 Controller IO queue size 128, less than required. 00:14:50.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.081 Controller IO queue size 128, less than required. 00:14:50.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:50.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:50.081 Initialization complete. Launching workers. 
00:14:50.081 ======================================================== 00:14:50.081 Latency(us) 00:14:50.081 Device Information : IOPS MiB/s Average min max 00:14:50.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1907.44 476.86 68555.43 33904.07 126372.40 00:14:50.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 635.98 158.99 209751.30 100015.61 372747.85 00:14:50.081 ======================================================== 00:14:50.081 Total : 2543.42 635.85 103861.34 33904.07 372747.85 00:14:50.081 00:14:50.081 18:23:48 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:50.341 No valid NVMe controllers or AIO or URING devices found 00:14:50.341 Initializing NVMe Controllers 00:14:50.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:50.341 Controller IO queue size 128, less than required. 00:14:50.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.341 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:50.341 Controller IO queue size 128, less than required. 00:14:50.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.341 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:50.341 WARNING: Some requested NVMe devices were skipped 00:14:50.341 18:23:48 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:52.873 Initializing NVMe Controllers 00:14:52.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.873 Controller IO queue size 128, less than required. 00:14:52.873 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:52.873 Controller IO queue size 128, less than required. 00:14:52.873 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:52.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:52.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:52.873 Initialization complete. Launching workers. 
00:14:52.873 00:14:52.873 ==================== 00:14:52.873 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:52.873 TCP transport: 00:14:52.873 polls: 7343 00:14:52.873 idle_polls: 0 00:14:52.873 sock_completions: 7343 00:14:52.873 nvme_completions: 6745 00:14:52.873 submitted_requests: 10269 00:14:52.873 queued_requests: 1 00:14:52.873 00:14:52.873 ==================== 00:14:52.873 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:52.873 TCP transport: 00:14:52.873 polls: 7402 00:14:52.873 idle_polls: 0 00:14:52.873 sock_completions: 7402 00:14:52.873 nvme_completions: 6555 00:14:52.873 submitted_requests: 9960 00:14:52.873 queued_requests: 1 00:14:52.873 ======================================================== 00:14:52.873 Latency(us) 00:14:52.873 Device Information : IOPS MiB/s Average min max 00:14:52.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1749.89 437.47 74266.26 38010.45 133301.59 00:14:52.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1702.39 425.60 76294.19 34872.25 128187.14 00:14:52.873 ======================================================== 00:14:52.873 Total : 3452.28 863.07 75266.27 34872.25 133301.59 00:14:52.873 00:14:52.873 18:23:51 -- host/perf.sh@66 -- # sync 00:14:52.873 18:23:51 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.132 18:23:51 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:14:53.132 18:23:51 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:14:53.132 18:23:51 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:14:53.390 18:23:51 -- host/perf.sh@72 -- # ls_guid=62b25fa8-e346-4c00-ae14-84962ab4b4fe 00:14:53.390 18:23:51 -- host/perf.sh@73 -- # get_lvs_free_mb 62b25fa8-e346-4c00-ae14-84962ab4b4fe 00:14:53.390 18:23:51 -- common/autotest_common.sh@1353 -- # local lvs_uuid=62b25fa8-e346-4c00-ae14-84962ab4b4fe 00:14:53.390 18:23:51 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:53.390 18:23:51 -- common/autotest_common.sh@1355 -- # local fc 00:14:53.390 18:23:51 -- common/autotest_common.sh@1356 -- # local cs 00:14:53.390 18:23:51 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:53.649 18:23:51 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:53.649 { 00:14:53.649 "uuid": "62b25fa8-e346-4c00-ae14-84962ab4b4fe", 00:14:53.649 "name": "lvs_0", 00:14:53.649 "base_bdev": "Nvme0n1", 00:14:53.649 "total_data_clusters": 1278, 00:14:53.649 "free_clusters": 1278, 00:14:53.649 "block_size": 4096, 00:14:53.649 "cluster_size": 4194304 00:14:53.649 } 00:14:53.649 ]' 00:14:53.649 18:23:51 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="62b25fa8-e346-4c00-ae14-84962ab4b4fe") .free_clusters' 00:14:53.907 18:23:51 -- common/autotest_common.sh@1358 -- # fc=1278 00:14:53.907 18:23:51 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="62b25fa8-e346-4c00-ae14-84962ab4b4fe") .cluster_size' 00:14:53.907 5112 00:14:53.907 18:23:51 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:53.907 18:23:51 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:14:53.907 18:23:51 -- common/autotest_common.sh@1363 -- # echo 5112 00:14:53.907 18:23:51 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:14:53.907 18:23:51 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
62b25fa8-e346-4c00-ae14-84962ab4b4fe lbd_0 5112 00:14:54.166 18:23:52 -- host/perf.sh@80 -- # lb_guid=1b1787f4-ca7a-4cc1-837e-98519133df04 00:14:54.166 18:23:52 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 1b1787f4-ca7a-4cc1-837e-98519133df04 lvs_n_0 00:14:54.424 18:23:52 -- host/perf.sh@83 -- # ls_nested_guid=d139312a-c509-43eb-b3cc-41818fe817f1 00:14:54.424 18:23:52 -- host/perf.sh@84 -- # get_lvs_free_mb d139312a-c509-43eb-b3cc-41818fe817f1 00:14:54.424 18:23:52 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d139312a-c509-43eb-b3cc-41818fe817f1 00:14:54.424 18:23:52 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:54.424 18:23:52 -- common/autotest_common.sh@1355 -- # local fc 00:14:54.424 18:23:52 -- common/autotest_common.sh@1356 -- # local cs 00:14:54.424 18:23:52 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:54.682 18:23:52 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:54.682 { 00:14:54.682 "uuid": "62b25fa8-e346-4c00-ae14-84962ab4b4fe", 00:14:54.682 "name": "lvs_0", 00:14:54.682 "base_bdev": "Nvme0n1", 00:14:54.682 "total_data_clusters": 1278, 00:14:54.682 "free_clusters": 0, 00:14:54.682 "block_size": 4096, 00:14:54.682 "cluster_size": 4194304 00:14:54.682 }, 00:14:54.682 { 00:14:54.682 "uuid": "d139312a-c509-43eb-b3cc-41818fe817f1", 00:14:54.682 "name": "lvs_n_0", 00:14:54.682 "base_bdev": "1b1787f4-ca7a-4cc1-837e-98519133df04", 00:14:54.682 "total_data_clusters": 1276, 00:14:54.682 "free_clusters": 1276, 00:14:54.682 "block_size": 4096, 00:14:54.682 "cluster_size": 4194304 00:14:54.682 } 00:14:54.682 ]' 00:14:54.682 18:23:52 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d139312a-c509-43eb-b3cc-41818fe817f1") .free_clusters' 00:14:54.682 18:23:52 -- common/autotest_common.sh@1358 -- # fc=1276 00:14:54.682 18:23:52 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d139312a-c509-43eb-b3cc-41818fe817f1") .cluster_size' 00:14:54.941 5104 00:14:54.941 18:23:52 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:54.941 18:23:52 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:14:54.941 18:23:52 -- common/autotest_common.sh@1363 -- # echo 5104 00:14:54.941 18:23:52 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:14:54.941 18:23:52 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d139312a-c509-43eb-b3cc-41818fe817f1 lbd_nest_0 5104 00:14:54.941 18:23:53 -- host/perf.sh@88 -- # lb_nested_guid=7709b63c-5111-44e8-b453-36c78cc476bd 00:14:54.941 18:23:53 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:55.198 18:23:53 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:14:55.198 18:23:53 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 7709b63c-5111-44e8-b453-36c78cc476bd 00:14:55.455 18:23:53 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.713 18:23:53 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:14:55.713 18:23:53 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:14:55.713 18:23:53 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:55.713 18:23:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:55.713 18:23:53 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:55.970 No valid NVMe controllers or AIO or URING devices found 00:14:56.227 Initializing NVMe Controllers 00:14:56.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:56.227 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:56.227 WARNING: Some requested NVMe devices were skipped 00:14:56.227 18:23:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:56.227 18:23:54 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:06.214 Initializing NVMe Controllers 00:15:06.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:06.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:06.214 Initialization complete. Launching workers. 00:15:06.214 ======================================================== 00:15:06.214 Latency(us) 00:15:06.214 Device Information : IOPS MiB/s Average min max 00:15:06.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 928.70 116.09 1076.70 332.96 8385.66 00:15:06.214 ======================================================== 00:15:06.214 Total : 928.70 116.09 1076.70 332.96 8385.66 00:15:06.214 00:15:06.473 18:24:04 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:06.473 18:24:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:06.473 18:24:04 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:06.731 No valid NVMe controllers or AIO or URING devices found 00:15:06.731 Initializing NVMe Controllers 00:15:06.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:06.731 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:06.731 WARNING: Some requested NVMe devices were skipped 00:15:06.731 18:24:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:06.731 18:24:04 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:18.935 Initializing NVMe Controllers 00:15:18.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:18.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:18.935 Initialization complete. Launching workers. 
00:15:18.935 ======================================================== 00:15:18.935 Latency(us) 00:15:18.935 Device Information : IOPS MiB/s Average min max 00:15:18.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1299.60 162.45 24644.56 5384.92 67641.86 00:15:18.935 ======================================================== 00:15:18.935 Total : 1299.60 162.45 24644.56 5384.92 67641.86 00:15:18.935 00:15:18.935 18:24:15 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:18.935 18:24:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:18.935 18:24:15 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:18.935 No valid NVMe controllers or AIO or URING devices found 00:15:18.935 Initializing NVMe Controllers 00:15:18.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:18.935 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:18.935 WARNING: Some requested NVMe devices were skipped 00:15:18.935 18:24:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:18.935 18:24:15 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:28.908 Initializing NVMe Controllers 00:15:28.908 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:28.908 Controller IO queue size 128, less than required. 00:15:28.908 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:28.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:28.908 Initialization complete. Launching workers. 
00:15:28.908 ======================================================== 00:15:28.908 Latency(us) 00:15:28.908 Device Information : IOPS MiB/s Average min max 00:15:28.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4081.73 510.22 31412.07 12839.99 63803.46 00:15:28.908 ======================================================== 00:15:28.908 Total : 4081.73 510.22 31412.07 12839.99 63803.46 00:15:28.908 00:15:28.908 18:24:25 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.908 18:24:26 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7709b63c-5111-44e8-b453-36c78cc476bd 00:15:28.908 18:24:26 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:28.908 18:24:26 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1b1787f4-ca7a-4cc1-837e-98519133df04 00:15:28.908 18:24:26 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:29.166 18:24:27 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:29.166 18:24:27 -- host/perf.sh@114 -- # nvmftestfini 00:15:29.166 18:24:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:29.166 18:24:27 -- nvmf/common.sh@116 -- # sync 00:15:29.166 18:24:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:29.166 18:24:27 -- nvmf/common.sh@119 -- # set +e 00:15:29.166 18:24:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:29.166 18:24:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:29.166 rmmod nvme_tcp 00:15:29.166 rmmod nvme_fabrics 00:15:29.166 rmmod nvme_keyring 00:15:29.166 18:24:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:29.166 18:24:27 -- nvmf/common.sh@123 -- # set -e 00:15:29.166 18:24:27 -- nvmf/common.sh@124 -- # return 0 00:15:29.166 18:24:27 -- nvmf/common.sh@477 -- # '[' -n 80181 ']' 00:15:29.166 18:24:27 -- nvmf/common.sh@478 -- # killprocess 80181 00:15:29.166 18:24:27 -- common/autotest_common.sh@936 -- # '[' -z 80181 ']' 00:15:29.166 18:24:27 -- common/autotest_common.sh@940 -- # kill -0 80181 00:15:29.166 18:24:27 -- common/autotest_common.sh@941 -- # uname 00:15:29.166 18:24:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.166 18:24:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80181 00:15:29.166 killing process with pid 80181 00:15:29.166 18:24:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:29.166 18:24:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:29.166 18:24:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80181' 00:15:29.166 18:24:27 -- common/autotest_common.sh@955 -- # kill 80181 00:15:29.166 18:24:27 -- common/autotest_common.sh@960 -- # wait 80181 00:15:30.101 18:24:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:30.101 18:24:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:30.101 18:24:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:30.101 18:24:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.101 18:24:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:30.101 18:24:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.101 18:24:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.101 18:24:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.101 18:24:28 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:15:30.101 00:15:30.101 real 0m50.277s 00:15:30.101 user 3m9.269s 00:15:30.101 sys 0m12.448s 00:15:30.101 ************************************ 00:15:30.101 END TEST nvmf_perf 00:15:30.101 ************************************ 00:15:30.101 18:24:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:30.101 18:24:28 -- common/autotest_common.sh@10 -- # set +x 00:15:30.101 18:24:28 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:30.101 18:24:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:30.101 18:24:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:30.101 18:24:28 -- common/autotest_common.sh@10 -- # set +x 00:15:30.101 ************************************ 00:15:30.101 START TEST nvmf_fio_host 00:15:30.101 ************************************ 00:15:30.101 18:24:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:30.101 * Looking for test storage... 00:15:30.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:30.101 18:24:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:30.101 18:24:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:30.101 18:24:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:30.360 18:24:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:30.360 18:24:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:30.360 18:24:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:30.360 18:24:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:30.360 18:24:28 -- scripts/common.sh@335 -- # IFS=.-: 00:15:30.360 18:24:28 -- scripts/common.sh@335 -- # read -ra ver1 00:15:30.360 18:24:28 -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.360 18:24:28 -- scripts/common.sh@336 -- # read -ra ver2 00:15:30.360 18:24:28 -- scripts/common.sh@337 -- # local 'op=<' 00:15:30.360 18:24:28 -- scripts/common.sh@339 -- # ver1_l=2 00:15:30.360 18:24:28 -- scripts/common.sh@340 -- # ver2_l=1 00:15:30.360 18:24:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:30.360 18:24:28 -- scripts/common.sh@343 -- # case "$op" in 00:15:30.360 18:24:28 -- scripts/common.sh@344 -- # : 1 00:15:30.360 18:24:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:30.360 18:24:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.360 18:24:28 -- scripts/common.sh@364 -- # decimal 1 00:15:30.360 18:24:28 -- scripts/common.sh@352 -- # local d=1 00:15:30.360 18:24:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.360 18:24:28 -- scripts/common.sh@354 -- # echo 1 00:15:30.360 18:24:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:30.360 18:24:28 -- scripts/common.sh@365 -- # decimal 2 00:15:30.360 18:24:28 -- scripts/common.sh@352 -- # local d=2 00:15:30.360 18:24:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.360 18:24:28 -- scripts/common.sh@354 -- # echo 2 00:15:30.360 18:24:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:30.360 18:24:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:30.360 18:24:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:30.360 18:24:28 -- scripts/common.sh@367 -- # return 0 00:15:30.360 18:24:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.360 18:24:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:30.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.360 --rc genhtml_branch_coverage=1 00:15:30.360 --rc genhtml_function_coverage=1 00:15:30.360 --rc genhtml_legend=1 00:15:30.360 --rc geninfo_all_blocks=1 00:15:30.360 --rc geninfo_unexecuted_blocks=1 00:15:30.360 00:15:30.360 ' 00:15:30.360 18:24:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:30.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.360 --rc genhtml_branch_coverage=1 00:15:30.360 --rc genhtml_function_coverage=1 00:15:30.360 --rc genhtml_legend=1 00:15:30.360 --rc geninfo_all_blocks=1 00:15:30.360 --rc geninfo_unexecuted_blocks=1 00:15:30.360 00:15:30.360 ' 00:15:30.360 18:24:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:30.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.360 --rc genhtml_branch_coverage=1 00:15:30.360 --rc genhtml_function_coverage=1 00:15:30.360 --rc genhtml_legend=1 00:15:30.360 --rc geninfo_all_blocks=1 00:15:30.360 --rc geninfo_unexecuted_blocks=1 00:15:30.360 00:15:30.360 ' 00:15:30.360 18:24:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:30.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.360 --rc genhtml_branch_coverage=1 00:15:30.360 --rc genhtml_function_coverage=1 00:15:30.360 --rc genhtml_legend=1 00:15:30.360 --rc geninfo_all_blocks=1 00:15:30.360 --rc geninfo_unexecuted_blocks=1 00:15:30.360 00:15:30.360 ' 00:15:30.360 18:24:28 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.360 18:24:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.360 18:24:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.360 18:24:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.361 18:24:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.361 18:24:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.361 18:24:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.361 18:24:28 -- paths/export.sh@5 -- # export PATH 00:15:30.361 18:24:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.361 18:24:28 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.361 18:24:28 -- nvmf/common.sh@7 -- # uname -s 00:15:30.361 18:24:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.361 18:24:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.361 18:24:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.361 18:24:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.361 18:24:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.361 18:24:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.361 18:24:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.361 18:24:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.361 18:24:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.361 18:24:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.361 18:24:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:15:30.361 18:24:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:15:30.361 18:24:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.361 18:24:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.361 18:24:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.361 18:24:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.361 18:24:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.361 18:24:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.361 18:24:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.361 18:24:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.361 18:24:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.361 18:24:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.361 18:24:28 -- paths/export.sh@5 -- # export PATH 00:15:30.361 18:24:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.361 18:24:28 -- nvmf/common.sh@46 -- # : 0 00:15:30.361 18:24:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:30.361 18:24:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:30.361 18:24:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:30.361 18:24:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.361 18:24:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.361 18:24:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:30.361 18:24:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:30.361 18:24:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:30.361 18:24:28 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:30.361 18:24:28 -- host/fio.sh@14 -- # nvmftestinit 00:15:30.361 18:24:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:30.361 18:24:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.361 18:24:28 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:15:30.361 18:24:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:30.361 18:24:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:30.361 18:24:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.361 18:24:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.361 18:24:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.361 18:24:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:30.361 18:24:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:30.361 18:24:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:30.361 18:24:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:30.361 18:24:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:30.361 18:24:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:30.361 18:24:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.361 18:24:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.361 18:24:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:30.361 18:24:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:30.361 18:24:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.361 18:24:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.361 18:24:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.361 18:24:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.361 18:24:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.361 18:24:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.361 18:24:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.361 18:24:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.361 18:24:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:30.361 18:24:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:30.361 Cannot find device "nvmf_tgt_br" 00:15:30.361 18:24:28 -- nvmf/common.sh@154 -- # true 00:15:30.361 18:24:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:30.361 Cannot find device "nvmf_tgt_br2" 00:15:30.361 18:24:28 -- nvmf/common.sh@155 -- # true 00:15:30.361 18:24:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:30.361 18:24:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:30.361 Cannot find device "nvmf_tgt_br" 00:15:30.361 18:24:28 -- nvmf/common.sh@157 -- # true 00:15:30.361 18:24:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:30.361 Cannot find device "nvmf_tgt_br2" 00:15:30.361 18:24:28 -- nvmf/common.sh@158 -- # true 00:15:30.361 18:24:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:30.361 18:24:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:30.361 18:24:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.361 18:24:28 -- nvmf/common.sh@161 -- # true 00:15:30.361 18:24:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.361 18:24:28 -- nvmf/common.sh@162 -- # true 00:15:30.361 18:24:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:30.361 18:24:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:30.361 18:24:28 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:30.361 18:24:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:30.620 18:24:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:30.620 18:24:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:30.620 18:24:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:30.620 18:24:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:30.620 18:24:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:30.620 18:24:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:30.620 18:24:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:30.620 18:24:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:30.620 18:24:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:30.620 18:24:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:30.620 18:24:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:30.620 18:24:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:30.620 18:24:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:30.620 18:24:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:30.620 18:24:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:30.620 18:24:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:30.620 18:24:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:30.620 18:24:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:30.620 18:24:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:30.620 18:24:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:30.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:15:30.620 00:15:30.620 --- 10.0.0.2 ping statistics --- 00:15:30.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.620 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:30.620 18:24:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:30.620 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:30.620 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:30.620 00:15:30.620 --- 10.0.0.3 ping statistics --- 00:15:30.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.620 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:30.620 18:24:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:30.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:15:30.620 00:15:30.620 --- 10.0.0.1 ping statistics --- 00:15:30.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.620 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:30.620 18:24:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.620 18:24:28 -- nvmf/common.sh@421 -- # return 0 00:15:30.620 18:24:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:30.620 18:24:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.620 18:24:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:30.620 18:24:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:30.620 18:24:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.620 18:24:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:30.620 18:24:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:30.620 18:24:28 -- host/fio.sh@16 -- # [[ y != y ]] 00:15:30.620 18:24:28 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:30.620 18:24:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.620 18:24:28 -- common/autotest_common.sh@10 -- # set +x 00:15:30.620 18:24:28 -- host/fio.sh@24 -- # nvmfpid=81012 00:15:30.620 18:24:28 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:30.620 18:24:28 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:30.620 18:24:28 -- host/fio.sh@28 -- # waitforlisten 81012 00:15:30.620 18:24:28 -- common/autotest_common.sh@829 -- # '[' -z 81012 ']' 00:15:30.620 18:24:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.620 18:24:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.620 18:24:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.620 18:24:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.620 18:24:28 -- common/autotest_common.sh@10 -- # set +x 00:15:30.620 [2024-11-17 18:24:28.849161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:30.620 [2024-11-17 18:24:28.849484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.878 [2024-11-17 18:24:28.989507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.878 [2024-11-17 18:24:29.030695] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:30.878 [2024-11-17 18:24:29.031144] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.878 [2024-11-17 18:24:29.031311] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.878 [2024-11-17 18:24:29.031467] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
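With all three addresses answering and nvme-tcp loaded, the target is launched inside the namespace and provisioned over JSON-RPC; the trace that follows shows exactly this sequence. A condensed sketch, with paths and NQNs copied from the log:

  # start nvmf_tgt in the target namespace and provision it via rpc.py
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The fio jobs further down are then run through the SPDK fio plugin: fio is LD_PRELOAD-ed with build/fio/spdk_nvme and pointed at the target with '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1', so the I/O path uses the userspace NVMe-oF initiator rather than the kernel nvme-tcp driver.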
00:15:30.878 [2024-11-17 18:24:29.031696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.878 [2024-11-17 18:24:29.031778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.878 [2024-11-17 18:24:29.031844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.878 [2024-11-17 18:24:29.031845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.809 18:24:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.809 18:24:29 -- common/autotest_common.sh@862 -- # return 0 00:15:31.809 18:24:29 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:32.066 [2024-11-17 18:24:30.124187] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.066 18:24:30 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:32.066 18:24:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:32.066 18:24:30 -- common/autotest_common.sh@10 -- # set +x 00:15:32.066 18:24:30 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:32.327 Malloc1 00:15:32.327 18:24:30 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:32.630 18:24:30 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:32.895 18:24:30 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.895 [2024-11-17 18:24:31.116850] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.895 18:24:31 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:33.152 18:24:31 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:33.152 18:24:31 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:33.152 18:24:31 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:33.152 18:24:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:33.152 18:24:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:33.152 18:24:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:33.152 18:24:31 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:33.152 18:24:31 -- common/autotest_common.sh@1330 -- # shift 00:15:33.152 18:24:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:33.152 18:24:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:33.152 18:24:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:33.152 18:24:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:33.152 18:24:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:33.152 18:24:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:33.152 18:24:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:33.152 18:24:31 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:33.153 18:24:31 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:33.153 18:24:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:33.153 18:24:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:33.153 18:24:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:33.153 18:24:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:33.153 18:24:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:33.153 18:24:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:33.409 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:33.409 fio-3.35 00:15:33.409 Starting 1 thread 00:15:35.933 00:15:35.933 test: (groupid=0, jobs=1): err= 0: pid=81095: Sun Nov 17 18:24:33 2024 00:15:35.933 read: IOPS=9481, BW=37.0MiB/s (38.8MB/s)(74.3MiB/2006msec) 00:15:35.933 slat (nsec): min=1907, max=426300, avg=2457.93, stdev=3846.80 00:15:35.933 clat (usec): min=2662, max=12701, avg=7034.95, stdev=526.84 00:15:35.933 lat (usec): min=2695, max=12704, avg=7037.41, stdev=526.65 00:15:35.933 clat percentiles (usec): 00:15:35.933 | 1.00th=[ 5866], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:15:35.933 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7111], 00:15:35.933 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7635], 95.00th=[ 7832], 00:15:35.933 | 99.00th=[ 8291], 99.50th=[ 8586], 99.90th=[10552], 99.95th=[11469], 00:15:35.933 | 99.99th=[12649] 00:15:35.933 bw ( KiB/s): min=37648, max=38280, per=99.95%, avg=37909.00, stdev=267.92, samples=4 00:15:35.933 iops : min= 9412, max= 9570, avg=9477.25, stdev=66.98, samples=4 00:15:35.933 write: IOPS=9491, BW=37.1MiB/s (38.9MB/s)(74.4MiB/2006msec); 0 zone resets 00:15:35.933 slat (nsec): min=1989, max=241845, avg=2514.48, stdev=2303.38 00:15:35.933 clat (usec): min=2523, max=11753, avg=6427.81, stdev=489.74 00:15:35.933 lat (usec): min=2537, max=11755, avg=6430.32, stdev=489.75 00:15:35.933 clat percentiles (usec): 00:15:35.933 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:15:35.933 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6521], 00:15:35.933 | 70.00th=[ 6652], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111], 00:15:35.933 | 99.00th=[ 7570], 99.50th=[ 8094], 99.90th=[10421], 99.95th=[11469], 00:15:35.933 | 99.99th=[11731] 00:15:35.933 bw ( KiB/s): min=37288, max=38360, per=99.88%, avg=37919.00, stdev=480.38, samples=4 00:15:35.933 iops : min= 9322, max= 9590, avg=9479.75, stdev=120.10, samples=4 00:15:35.933 lat (msec) : 4=0.15%, 10=99.72%, 20=0.13% 00:15:35.933 cpu : usr=69.48%, sys=22.69%, ctx=24, majf=0, minf=5 00:15:35.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:35.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:35.933 issued rwts: total=19020,19040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:35.933 00:15:35.933 Run status group 0 (all jobs): 00:15:35.933 READ: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=74.3MiB (77.9MB), 
run=2006-2006msec 00:15:35.933 WRITE: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=74.4MiB (78.0MB), run=2006-2006msec 00:15:35.933 18:24:33 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:35.933 18:24:33 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:35.933 18:24:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:35.933 18:24:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.933 18:24:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:35.933 18:24:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.933 18:24:33 -- common/autotest_common.sh@1330 -- # shift 00:15:35.933 18:24:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:35.933 18:24:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.933 18:24:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.933 18:24:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:35.933 18:24:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:35.933 18:24:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:35.933 18:24:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:35.933 18:24:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.933 18:24:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.933 18:24:33 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:35.933 18:24:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:35.933 18:24:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:35.933 18:24:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:35.933 18:24:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:35.933 18:24:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:35.933 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:35.933 fio-3.35 00:15:35.933 Starting 1 thread 00:15:38.456 00:15:38.456 test: (groupid=0, jobs=1): err= 0: pid=81138: Sun Nov 17 18:24:36 2024 00:15:38.456 read: IOPS=8696, BW=136MiB/s (142MB/s)(273MiB/2007msec) 00:15:38.456 slat (usec): min=2, max=135, avg= 3.76, stdev= 2.62 00:15:38.456 clat (usec): min=2109, max=16634, avg=8102.23, stdev=2559.85 00:15:38.456 lat (usec): min=2112, max=16637, avg=8105.99, stdev=2560.00 00:15:38.456 clat percentiles (usec): 00:15:38.456 | 1.00th=[ 3884], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5800], 00:15:38.456 | 30.00th=[ 6390], 40.00th=[ 6980], 50.00th=[ 7701], 60.00th=[ 8455], 00:15:38.456 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[11469], 95.00th=[12780], 00:15:38.456 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16188], 99.95th=[16319], 00:15:38.456 | 99.99th=[16450] 00:15:38.456 bw ( KiB/s): min=65824, max=78048, per=51.76%, avg=72018.50, stdev=5433.34, samples=4 00:15:38.456 iops : 
min= 4114, max= 4878, avg=4501.00, stdev=339.48, samples=4 00:15:38.456 write: IOPS=5192, BW=81.1MiB/s (85.1MB/s)(147MiB/1809msec); 0 zone resets 00:15:38.456 slat (usec): min=32, max=365, avg=38.95, stdev= 9.74 00:15:38.456 clat (usec): min=5345, max=19792, avg=11648.28, stdev=2122.08 00:15:38.456 lat (usec): min=5380, max=19826, avg=11687.23, stdev=2122.70 00:15:38.456 clat percentiles (usec): 00:15:38.456 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:15:38.456 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:15:38.456 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14484], 95.00th=[15664], 00:15:38.456 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19268], 99.95th=[19530], 00:15:38.456 | 99.99th=[19792] 00:15:38.456 bw ( KiB/s): min=69216, max=81504, per=90.40%, avg=75105.00, stdev=5484.21, samples=4 00:15:38.456 iops : min= 4326, max= 5094, avg=4694.00, stdev=342.73, samples=4 00:15:38.456 lat (msec) : 4=0.83%, 10=55.96%, 20=43.21% 00:15:38.456 cpu : usr=82.80%, sys=12.56%, ctx=6, majf=0, minf=1 00:15:38.456 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:38.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:38.456 issued rwts: total=17453,9393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.456 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:38.456 00:15:38.456 Run status group 0 (all jobs): 00:15:38.456 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=273MiB (286MB), run=2007-2007msec 00:15:38.456 WRITE: bw=81.1MiB/s (85.1MB/s), 81.1MiB/s-81.1MiB/s (85.1MB/s-85.1MB/s), io=147MiB (154MB), run=1809-1809msec 00:15:38.456 18:24:36 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.456 18:24:36 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:15:38.456 18:24:36 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:15:38.456 18:24:36 -- host/fio.sh@51 -- # get_nvme_bdfs 00:15:38.456 18:24:36 -- common/autotest_common.sh@1508 -- # bdfs=() 00:15:38.456 18:24:36 -- common/autotest_common.sh@1508 -- # local bdfs 00:15:38.456 18:24:36 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:38.456 18:24:36 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:38.456 18:24:36 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:15:38.456 18:24:36 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:15:38.456 18:24:36 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:15:38.456 18:24:36 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:15:38.714 Nvme0n1 00:15:38.971 18:24:36 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:15:38.971 18:24:37 -- host/fio.sh@53 -- # ls_guid=764061fb-8ec7-49ea-9a69-89d9167530ee 00:15:38.971 18:24:37 -- host/fio.sh@54 -- # get_lvs_free_mb 764061fb-8ec7-49ea-9a69-89d9167530ee 00:15:38.971 18:24:37 -- common/autotest_common.sh@1353 -- # local lvs_uuid=764061fb-8ec7-49ea-9a69-89d9167530ee 00:15:38.971 18:24:37 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:38.971 18:24:37 -- common/autotest_common.sh@1355 -- # local fc 00:15:38.972 18:24:37 -- 
common/autotest_common.sh@1356 -- # local cs 00:15:38.972 18:24:37 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:39.229 18:24:37 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:39.229 { 00:15:39.229 "uuid": "764061fb-8ec7-49ea-9a69-89d9167530ee", 00:15:39.229 "name": "lvs_0", 00:15:39.229 "base_bdev": "Nvme0n1", 00:15:39.229 "total_data_clusters": 4, 00:15:39.229 "free_clusters": 4, 00:15:39.229 "block_size": 4096, 00:15:39.229 "cluster_size": 1073741824 00:15:39.229 } 00:15:39.229 ]' 00:15:39.229 18:24:37 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="764061fb-8ec7-49ea-9a69-89d9167530ee") .free_clusters' 00:15:39.486 18:24:37 -- common/autotest_common.sh@1358 -- # fc=4 00:15:39.486 18:24:37 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="764061fb-8ec7-49ea-9a69-89d9167530ee") .cluster_size' 00:15:39.486 4096 00:15:39.486 18:24:37 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:15:39.486 18:24:37 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:15:39.486 18:24:37 -- common/autotest_common.sh@1363 -- # echo 4096 00:15:39.486 18:24:37 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:15:39.744 18f056e2-0ebb-49ae-93aa-dece7ad61642 00:15:39.744 18:24:37 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:15:40.001 18:24:38 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:15:40.258 18:24:38 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:40.516 18:24:38 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:40.516 18:24:38 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:40.516 18:24:38 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:40.516 18:24:38 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:40.516 18:24:38 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:40.516 18:24:38 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.516 18:24:38 -- common/autotest_common.sh@1330 -- # shift 00:15:40.516 18:24:38 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:40.516 18:24:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.516 18:24:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.516 18:24:38 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:40.516 18:24:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:40.516 18:24:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:40.516 18:24:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:40.516 18:24:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.516 18:24:38 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:40.516 18:24:38 -- common/autotest_common.sh@1334 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.516 18:24:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:40.516 18:24:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:40.516 18:24:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:40.516 18:24:38 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:40.516 18:24:38 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:40.516 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:40.516 fio-3.35 00:15:40.516 Starting 1 thread 00:15:43.043 00:15:43.043 test: (groupid=0, jobs=1): err= 0: pid=81248: Sun Nov 17 18:24:40 2024 00:15:43.043 read: IOPS=6562, BW=25.6MiB/s (26.9MB/s)(51.5MiB/2008msec) 00:15:43.043 slat (nsec): min=1962, max=322326, avg=2627.82, stdev=3725.84 00:15:43.043 clat (usec): min=2963, max=17863, avg=10181.83, stdev=852.53 00:15:43.043 lat (usec): min=2973, max=17866, avg=10184.46, stdev=852.22 00:15:43.043 clat percentiles (usec): 00:15:43.044 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:15:43.044 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:15:43.044 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:15:43.044 | 99.00th=[11994], 99.50th=[12387], 99.90th=[16712], 99.95th=[16909], 00:15:43.044 | 99.99th=[17957] 00:15:43.044 bw ( KiB/s): min=25556, max=26728, per=99.84%, avg=26209.00, stdev=526.55, samples=4 00:15:43.044 iops : min= 6389, max= 6682, avg=6552.25, stdev=131.64, samples=4 00:15:43.044 write: IOPS=6572, BW=25.7MiB/s (26.9MB/s)(51.6MiB/2008msec); 0 zone resets 00:15:43.044 slat (nsec): min=1992, max=253745, avg=2716.61, stdev=2797.38 00:15:43.044 clat (usec): min=2411, max=17529, avg=9236.34, stdev=791.05 00:15:43.044 lat (usec): min=2424, max=17531, avg=9239.05, stdev=790.89 00:15:43.044 clat percentiles (usec): 00:15:43.044 | 1.00th=[ 7570], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8586], 00:15:43.044 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:15:43.044 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:15:43.044 | 99.00th=[10945], 99.50th=[11207], 99.90th=[14746], 99.95th=[15926], 00:15:43.044 | 99.99th=[16909] 00:15:43.044 bw ( KiB/s): min=26096, max=26530, per=99.89%, avg=26260.50, stdev=203.01, samples=4 00:15:43.044 iops : min= 6524, max= 6632, avg=6565.00, stdev=50.53, samples=4 00:15:43.044 lat (msec) : 4=0.06%, 10=63.42%, 20=36.52% 00:15:43.044 cpu : usr=72.89%, sys=21.33%, ctx=6, majf=0, minf=5 00:15:43.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:43.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:43.044 issued rwts: total=13178,13197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:43.044 00:15:43.044 Run status group 0 (all jobs): 00:15:43.044 READ: bw=25.6MiB/s (26.9MB/s), 25.6MiB/s-25.6MiB/s (26.9MB/s-26.9MB/s), io=51.5MiB (54.0MB), run=2008-2008msec 00:15:43.044 WRITE: bw=25.7MiB/s (26.9MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=51.6MiB (54.1MB), run=2008-2008msec 00:15:43.044 18:24:41 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:43.044 18:24:41 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:15:43.609 18:24:41 -- host/fio.sh@64 -- # ls_nested_guid=6762c5d4-5f38-446b-84e4-41f305f59186 00:15:43.609 18:24:41 -- host/fio.sh@65 -- # get_lvs_free_mb 6762c5d4-5f38-446b-84e4-41f305f59186 00:15:43.609 18:24:41 -- common/autotest_common.sh@1353 -- # local lvs_uuid=6762c5d4-5f38-446b-84e4-41f305f59186 00:15:43.609 18:24:41 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:43.609 18:24:41 -- common/autotest_common.sh@1355 -- # local fc 00:15:43.609 18:24:41 -- common/autotest_common.sh@1356 -- # local cs 00:15:43.609 18:24:41 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:43.609 18:24:41 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:43.609 { 00:15:43.609 "uuid": "764061fb-8ec7-49ea-9a69-89d9167530ee", 00:15:43.609 "name": "lvs_0", 00:15:43.609 "base_bdev": "Nvme0n1", 00:15:43.609 "total_data_clusters": 4, 00:15:43.609 "free_clusters": 0, 00:15:43.609 "block_size": 4096, 00:15:43.609 "cluster_size": 1073741824 00:15:43.609 }, 00:15:43.609 { 00:15:43.609 "uuid": "6762c5d4-5f38-446b-84e4-41f305f59186", 00:15:43.609 "name": "lvs_n_0", 00:15:43.609 "base_bdev": "18f056e2-0ebb-49ae-93aa-dece7ad61642", 00:15:43.609 "total_data_clusters": 1022, 00:15:43.609 "free_clusters": 1022, 00:15:43.609 "block_size": 4096, 00:15:43.609 "cluster_size": 4194304 00:15:43.609 } 00:15:43.609 ]' 00:15:43.609 18:24:41 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="6762c5d4-5f38-446b-84e4-41f305f59186") .free_clusters' 00:15:43.609 18:24:41 -- common/autotest_common.sh@1358 -- # fc=1022 00:15:43.609 18:24:41 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="6762c5d4-5f38-446b-84e4-41f305f59186") .cluster_size' 00:15:43.866 18:24:41 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:43.866 18:24:41 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:15:43.866 4088 00:15:43.866 18:24:41 -- common/autotest_common.sh@1363 -- # echo 4088 00:15:43.866 18:24:41 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:15:44.124 c7dacde9-d8d6-4360-bd70-a8586a93a232 00:15:44.124 18:24:42 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:15:44.382 18:24:42 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:15:44.382 18:24:42 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:44.640 18:24:42 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:44.640 18:24:42 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:44.640 18:24:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:44.640 18:24:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:44.640 
18:24:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:44.640 18:24:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:44.640 18:24:42 -- common/autotest_common.sh@1330 -- # shift 00:15:44.640 18:24:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:44.640 18:24:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.640 18:24:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:44.640 18:24:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:44.640 18:24:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:44.640 18:24:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:44.640 18:24:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:44.640 18:24:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.640 18:24:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:44.640 18:24:42 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:44.640 18:24:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:44.898 18:24:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:44.898 18:24:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:44.898 18:24:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:44.898 18:24:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:44.898 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:44.898 fio-3.35 00:15:44.898 Starting 1 thread 00:15:47.444 00:15:47.444 test: (groupid=0, jobs=1): err= 0: pid=81326: Sun Nov 17 18:24:45 2024 00:15:47.444 read: IOPS=5864, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2010msec) 00:15:47.444 slat (nsec): min=1908, max=239242, avg=2748.01, stdev=3132.32 00:15:47.444 clat (usec): min=2985, max=20404, avg=11416.11, stdev=966.96 00:15:47.444 lat (usec): min=2990, max=20406, avg=11418.86, stdev=966.75 00:15:47.444 clat percentiles (usec): 00:15:47.444 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:15:47.444 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:15:47.444 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:15:47.444 | 99.00th=[13435], 99.50th=[13960], 99.90th=[17695], 99.95th=[20317], 00:15:47.444 | 99.99th=[20317] 00:15:47.444 bw ( KiB/s): min=22672, max=23784, per=99.96%, avg=23450.00, stdev=527.21, samples=4 00:15:47.444 iops : min= 5668, max= 5946, avg=5862.50, stdev=131.80, samples=4 00:15:47.444 write: IOPS=5855, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2010msec); 0 zone resets 00:15:47.444 slat (nsec): min=1972, max=178986, avg=2843.23, stdev=2360.27 00:15:47.444 clat (usec): min=1873, max=19003, avg=10351.38, stdev=920.49 00:15:47.444 lat (usec): min=1880, max=19006, avg=10354.22, stdev=920.45 00:15:47.444 clat percentiles (usec): 00:15:47.444 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:15:47.444 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:15:47.444 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:15:47.444 | 99.00th=[12387], 99.50th=[12911], 99.90th=[17433], 99.95th=[18744], 00:15:47.444 | 99.99th=[19006] 
00:15:47.444 bw ( KiB/s): min=23232, max=23496, per=99.95%, avg=23410.00, stdev=122.96, samples=4 00:15:47.444 iops : min= 5808, max= 5874, avg=5852.50, stdev=30.74, samples=4 00:15:47.444 lat (msec) : 2=0.01%, 4=0.06%, 10=19.31%, 20=80.60%, 50=0.03% 00:15:47.444 cpu : usr=72.27%, sys=21.70%, ctx=4, majf=0, minf=5 00:15:47.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:47.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:47.444 issued rwts: total=11788,11769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:47.444 00:15:47.444 Run status group 0 (all jobs): 00:15:47.444 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.3MB), run=2010-2010msec 00:15:47.444 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.2MB), run=2010-2010msec 00:15:47.444 18:24:45 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:47.444 18:24:45 -- host/fio.sh@74 -- # sync 00:15:47.444 18:24:45 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:15:47.702 18:24:45 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:47.959 18:24:46 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:15:48.217 18:24:46 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:48.475 18:24:46 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:49.410 18:24:47 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:49.410 18:24:47 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:49.410 18:24:47 -- host/fio.sh@86 -- # nvmftestfini 00:15:49.410 18:24:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:49.410 18:24:47 -- nvmf/common.sh@116 -- # sync 00:15:49.410 18:24:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:49.410 18:24:47 -- nvmf/common.sh@119 -- # set +e 00:15:49.410 18:24:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:49.410 18:24:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:49.410 rmmod nvme_tcp 00:15:49.410 rmmod nvme_fabrics 00:15:49.410 rmmod nvme_keyring 00:15:49.410 18:24:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:49.410 18:24:47 -- nvmf/common.sh@123 -- # set -e 00:15:49.410 18:24:47 -- nvmf/common.sh@124 -- # return 0 00:15:49.410 18:24:47 -- nvmf/common.sh@477 -- # '[' -n 81012 ']' 00:15:49.410 18:24:47 -- nvmf/common.sh@478 -- # killprocess 81012 00:15:49.410 18:24:47 -- common/autotest_common.sh@936 -- # '[' -z 81012 ']' 00:15:49.410 18:24:47 -- common/autotest_common.sh@940 -- # kill -0 81012 00:15:49.410 18:24:47 -- common/autotest_common.sh@941 -- # uname 00:15:49.410 18:24:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:49.410 18:24:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81012 00:15:49.410 killing process with pid 81012 00:15:49.410 18:24:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:49.410 18:24:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:49.410 18:24:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81012' 
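After the three fio passes (the Malloc1 namespace, the lvol carved from Nvme0n1, and the nested lvol on top of it), host/fio.sh unwinds the stack in reverse order before nvmftestfini unloads the kernel modules. Condensed from the trace around this point ($rpc as in the earlier sketch):

  # teardown order shown in the trace
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
  $rpc -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0
  $rpc bdev_lvol_delete_lvstore -l lvs_n_0
  $rpc bdev_lvol_delete lvs_0/lbd_0
  $rpc bdev_lvol_delete_lvstore -l lvs_0
  $rpc bdev_nvme_detach_controller Nvme0
  # nvmftestfini then removes nvme-tcp/nvme-fabrics (the rmmod lines above) and kills nvmf_tgt (pid 81012, continued below)

Deleting the nested lvstore before its base lvol, and the base lvstore before detaching Nvme0, matters: each layer holds a claim on the bdev underneath it.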
00:15:49.410 18:24:47 -- common/autotest_common.sh@955 -- # kill 81012 00:15:49.410 18:24:47 -- common/autotest_common.sh@960 -- # wait 81012 00:15:49.669 18:24:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:49.669 18:24:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:49.669 18:24:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:49.669 18:24:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.669 18:24:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:49.669 18:24:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.669 18:24:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.669 18:24:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.669 18:24:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:49.669 ************************************ 00:15:49.669 END TEST nvmf_fio_host 00:15:49.669 ************************************ 00:15:49.669 00:15:49.669 real 0m19.604s 00:15:49.669 user 1m25.958s 00:15:49.669 sys 0m4.296s 00:15:49.669 18:24:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:49.669 18:24:47 -- common/autotest_common.sh@10 -- # set +x 00:15:49.669 18:24:47 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:49.669 18:24:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:49.669 18:24:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:49.669 18:24:47 -- common/autotest_common.sh@10 -- # set +x 00:15:49.669 ************************************ 00:15:49.669 START TEST nvmf_failover 00:15:49.669 ************************************ 00:15:49.669 18:24:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:49.927 * Looking for test storage... 00:15:49.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:49.927 18:24:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:49.927 18:24:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:49.927 18:24:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:49.927 18:24:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:49.927 18:24:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:49.927 18:24:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:49.927 18:24:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:49.927 18:24:48 -- scripts/common.sh@335 -- # IFS=.-: 00:15:49.927 18:24:48 -- scripts/common.sh@335 -- # read -ra ver1 00:15:49.927 18:24:48 -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.927 18:24:48 -- scripts/common.sh@336 -- # read -ra ver2 00:15:49.927 18:24:48 -- scripts/common.sh@337 -- # local 'op=<' 00:15:49.927 18:24:48 -- scripts/common.sh@339 -- # ver1_l=2 00:15:49.927 18:24:48 -- scripts/common.sh@340 -- # ver2_l=1 00:15:49.927 18:24:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:49.927 18:24:48 -- scripts/common.sh@343 -- # case "$op" in 00:15:49.927 18:24:48 -- scripts/common.sh@344 -- # : 1 00:15:49.927 18:24:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:49.927 18:24:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:49.927 18:24:48 -- scripts/common.sh@364 -- # decimal 1 00:15:49.927 18:24:48 -- scripts/common.sh@352 -- # local d=1 00:15:49.927 18:24:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.927 18:24:48 -- scripts/common.sh@354 -- # echo 1 00:15:49.927 18:24:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:49.927 18:24:48 -- scripts/common.sh@365 -- # decimal 2 00:15:49.927 18:24:48 -- scripts/common.sh@352 -- # local d=2 00:15:49.927 18:24:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.927 18:24:48 -- scripts/common.sh@354 -- # echo 2 00:15:49.927 18:24:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:49.927 18:24:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:49.927 18:24:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:49.927 18:24:48 -- scripts/common.sh@367 -- # return 0 00:15:49.927 18:24:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.927 18:24:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:49.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.927 --rc genhtml_branch_coverage=1 00:15:49.927 --rc genhtml_function_coverage=1 00:15:49.927 --rc genhtml_legend=1 00:15:49.927 --rc geninfo_all_blocks=1 00:15:49.927 --rc geninfo_unexecuted_blocks=1 00:15:49.927 00:15:49.927 ' 00:15:49.927 18:24:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:49.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.927 --rc genhtml_branch_coverage=1 00:15:49.927 --rc genhtml_function_coverage=1 00:15:49.927 --rc genhtml_legend=1 00:15:49.927 --rc geninfo_all_blocks=1 00:15:49.927 --rc geninfo_unexecuted_blocks=1 00:15:49.927 00:15:49.927 ' 00:15:49.927 18:24:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:49.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.927 --rc genhtml_branch_coverage=1 00:15:49.927 --rc genhtml_function_coverage=1 00:15:49.927 --rc genhtml_legend=1 00:15:49.927 --rc geninfo_all_blocks=1 00:15:49.927 --rc geninfo_unexecuted_blocks=1 00:15:49.927 00:15:49.927 ' 00:15:49.927 18:24:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:49.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.927 --rc genhtml_branch_coverage=1 00:15:49.927 --rc genhtml_function_coverage=1 00:15:49.927 --rc genhtml_legend=1 00:15:49.927 --rc geninfo_all_blocks=1 00:15:49.927 --rc geninfo_unexecuted_blocks=1 00:15:49.927 00:15:49.927 ' 00:15:49.927 18:24:48 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:49.927 18:24:48 -- nvmf/common.sh@7 -- # uname -s 00:15:49.927 18:24:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.927 18:24:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.927 18:24:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.928 18:24:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.928 18:24:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.928 18:24:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.928 18:24:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.928 18:24:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.928 18:24:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.928 18:24:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.928 18:24:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:15:49.928 
18:24:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:15:49.928 18:24:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.928 18:24:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.928 18:24:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:49.928 18:24:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:49.928 18:24:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.928 18:24:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.928 18:24:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.928 18:24:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.928 18:24:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.928 18:24:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.928 18:24:48 -- paths/export.sh@5 -- # export PATH 00:15:49.928 18:24:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.928 18:24:48 -- nvmf/common.sh@46 -- # : 0 00:15:49.928 18:24:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:49.928 18:24:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:49.928 18:24:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:49.928 18:24:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.928 18:24:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.928 18:24:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
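failover.sh sources the same nvmf/common.sh helpers, so a fresh host NQN/ID pair is generated with 'nvme gen-hostnqn' and stored in NVME_HOSTNQN/NVME_HOSTID (values above). Those feed the NVME_HOST array, intended to be spliced into initiator-side connect commands; no such connect appears in this part of the trace, so the usage below is an assumption for illustration only, with the values copied from the log:

  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870
  NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # assumed illustrative use from the initiator side (not part of this trace):
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"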
00:15:49.928 18:24:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:49.928 18:24:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:49.928 18:24:48 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.928 18:24:48 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.928 18:24:48 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:49.928 18:24:48 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:49.928 18:24:48 -- host/failover.sh@18 -- # nvmftestinit 00:15:49.928 18:24:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:49.928 18:24:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.928 18:24:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:49.928 18:24:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:49.928 18:24:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:49.928 18:24:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.928 18:24:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.928 18:24:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.928 18:24:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:49.928 18:24:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:49.928 18:24:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:49.928 18:24:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:49.928 18:24:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:49.928 18:24:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:49.928 18:24:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.928 18:24:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.928 18:24:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:49.928 18:24:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:49.928 18:24:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:49.928 18:24:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:49.928 18:24:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:49.928 18:24:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.928 18:24:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:49.928 18:24:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:49.928 18:24:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:49.928 18:24:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:49.928 18:24:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:49.928 18:24:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:49.928 Cannot find device "nvmf_tgt_br" 00:15:49.928 18:24:48 -- nvmf/common.sh@154 -- # true 00:15:49.928 18:24:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.928 Cannot find device "nvmf_tgt_br2" 00:15:49.928 18:24:48 -- nvmf/common.sh@155 -- # true 00:15:49.928 18:24:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:49.928 18:24:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:49.928 Cannot find device "nvmf_tgt_br" 00:15:49.928 18:24:48 -- nvmf/common.sh@157 -- # true 00:15:49.928 18:24:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:49.928 Cannot find device "nvmf_tgt_br2" 00:15:49.928 18:24:48 -- nvmf/common.sh@158 -- # true 00:15:49.928 18:24:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:50.187 18:24:48 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:15:50.187 18:24:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:50.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.187 18:24:48 -- nvmf/common.sh@161 -- # true 00:15:50.187 18:24:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:50.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.187 18:24:48 -- nvmf/common.sh@162 -- # true 00:15:50.187 18:24:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:50.187 18:24:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:50.187 18:24:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:50.187 18:24:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:50.187 18:24:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:50.187 18:24:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:50.187 18:24:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:50.187 18:24:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:50.187 18:24:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:50.187 18:24:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:50.187 18:24:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:50.187 18:24:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:50.187 18:24:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:50.187 18:24:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:50.187 18:24:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:50.187 18:24:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:50.187 18:24:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:50.187 18:24:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:50.187 18:24:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:50.187 18:24:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:50.187 18:24:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:50.187 18:24:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:50.187 18:24:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:50.187 18:24:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:50.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:15:50.187 00:15:50.187 --- 10.0.0.2 ping statistics --- 00:15:50.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.187 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:50.187 18:24:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:50.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:50.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:15:50.187 00:15:50.187 --- 10.0.0.3 ping statistics --- 00:15:50.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.187 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:50.187 18:24:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:50.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:50.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:15:50.187 00:15:50.187 --- 10.0.0.1 ping statistics --- 00:15:50.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.187 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:15:50.187 18:24:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.187 18:24:48 -- nvmf/common.sh@421 -- # return 0 00:15:50.187 18:24:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:50.187 18:24:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.187 18:24:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:50.187 18:24:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:50.187 18:24:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.187 18:24:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:50.187 18:24:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:50.187 18:24:48 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:50.187 18:24:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:50.187 18:24:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:50.187 18:24:48 -- common/autotest_common.sh@10 -- # set +x 00:15:50.187 18:24:48 -- nvmf/common.sh@469 -- # nvmfpid=81570 00:15:50.187 18:24:48 -- nvmf/common.sh@470 -- # waitforlisten 81570 00:15:50.187 18:24:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:50.187 18:24:48 -- common/autotest_common.sh@829 -- # '[' -z 81570 ']' 00:15:50.187 18:24:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.187 18:24:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.187 18:24:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.187 18:24:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.187 18:24:48 -- common/autotest_common.sh@10 -- # set +x 00:15:50.445 [2024-11-17 18:24:48.461352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:50.445 [2024-11-17 18:24:48.461426] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.445 [2024-11-17 18:24:48.596235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:50.445 [2024-11-17 18:24:48.628462] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:50.445 [2024-11-17 18:24:48.629055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.445 [2024-11-17 18:24:48.629342] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
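[editor's note] The nvmf_veth_init sequence traced above builds the test network before the target starts. Condensed, and reusing only the interface names and addresses printed in the log (the real helper in nvmf/common.sh also performs the cleanup and error handling visible earlier in the trace), it amounts to roughly:

  # condensed sketch of the veth/bridge topology assembled by nvmf_veth_init
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator interface
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # sanity-check both target addresses, as the trace does

The pings above confirm both target addresses are reachable from the root namespace before the nvmf target is launched inside nvmf_tgt_ns_spdk.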
00:15:50.445 [2024-11-17 18:24:48.629560] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.445 [2024-11-17 18:24:48.629878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.445 [2024-11-17 18:24:48.629939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:50.445 [2024-11-17 18:24:48.629946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.446 18:24:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.446 18:24:48 -- common/autotest_common.sh@862 -- # return 0 00:15:50.446 18:24:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:50.446 18:24:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:50.446 18:24:48 -- common/autotest_common.sh@10 -- # set +x 00:15:50.703 18:24:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.703 18:24:48 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:50.703 [2024-11-17 18:24:48.957231] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.960 18:24:48 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:50.960 Malloc0 00:15:50.960 18:24:49 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:51.526 18:24:49 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:51.526 18:24:49 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.783 [2024-11-17 18:24:49.966213] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.783 18:24:49 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:52.041 [2024-11-17 18:24:50.250426] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:52.041 18:24:50 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:52.299 [2024-11-17 18:24:50.490724] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:52.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
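[editor's note] In script form, the target-side configuration established by the rpc.py calls above is roughly the following (flags copied verbatim from the trace; comments are interpretive):

  # condensed sketch of the subsystem setup performed over /var/tmp/spdk.sock
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as recorded in the trace
  $rpc_py bdev_malloc_create 64 512 -b Malloc0                        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The trace that follows then starts bdevperf against /var/tmp/bdevperf.sock, attaches the same subsystem as NVMe0 through ports 4420 and 4421, kicks off a 15-second verify workload, and removes and re-adds listeners one port at a time (4420, 4421, then 4422) to force path failover while I/O is in flight.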
00:15:52.299 18:24:50 -- host/failover.sh@31 -- # bdevperf_pid=81620 00:15:52.299 18:24:50 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:52.299 18:24:50 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:52.299 18:24:50 -- host/failover.sh@34 -- # waitforlisten 81620 /var/tmp/bdevperf.sock 00:15:52.299 18:24:50 -- common/autotest_common.sh@829 -- # '[' -z 81620 ']' 00:15:52.299 18:24:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:52.299 18:24:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.299 18:24:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:52.300 18:24:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.300 18:24:50 -- common/autotest_common.sh@10 -- # set +x 00:15:53.235 18:24:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.235 18:24:51 -- common/autotest_common.sh@862 -- # return 0 00:15:53.235 18:24:51 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.802 NVMe0n1 00:15:53.802 18:24:51 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:54.069 00:15:54.069 18:24:52 -- host/failover.sh@39 -- # run_test_pid=81648 00:15:54.069 18:24:52 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:54.069 18:24:52 -- host/failover.sh@41 -- # sleep 1 00:15:55.004 18:24:53 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.262 [2024-11-17 18:24:53.347644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347743] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347751] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347765] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 [2024-11-17 18:24:53.347803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255c2b0 is same with the state(5) to be set 00:15:55.262 18:24:53 -- host/failover.sh@45 -- # sleep 3 00:15:58.545 18:24:56 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:58.545 00:15:58.545 18:24:56 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:58.804 [2024-11-17 18:24:56.973754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973816] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973897] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 
00:15:58.804 [2024-11-17 18:24:56.973912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973919] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973934] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 [2024-11-17 18:24:56.973949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a86b0 is same with the state(5) to be set 00:15:58.804 18:24:56 -- host/failover.sh@50 -- # sleep 3 00:16:02.095 18:25:00 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:02.095 [2024-11-17 18:25:00.226449] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.095 18:25:00 -- host/failover.sh@55 -- # sleep 1 00:16:03.032 18:25:01 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:03.289 [2024-11-17 18:25:01.512720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254fb20 is same with the state(5) to be set 00:16:03.289 [2024-11-17 18:25:01.512783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254fb20 is same with the state(5) to be set 00:16:03.289 [2024-11-17 18:25:01.512818] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254fb20 is same with the state(5) to be set 00:16:03.289 [2024-11-17 18:25:01.512830] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254fb20 is same with the state(5) to be set 00:16:03.289 [2024-11-17 18:25:01.512842] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254fb20 is same with the state(5) to be set 00:16:03.289 [2024-11-17 18:25:01.512854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254fb20 is same with the state(5) to be set 00:16:03.289 18:25:01 -- host/failover.sh@59 -- # wait 81648 00:16:09.856 0 00:16:09.856 18:25:07 -- host/failover.sh@61 -- # killprocess 81620 00:16:09.856 18:25:07 -- common/autotest_common.sh@936 -- # '[' -z 81620 ']' 00:16:09.856 18:25:07 -- common/autotest_common.sh@940 -- # kill -0 81620 00:16:09.856 18:25:07 -- common/autotest_common.sh@941 -- # uname 00:16:09.856 18:25:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:09.856 18:25:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81620 00:16:09.856 18:25:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:09.856 18:25:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:09.856 killing process with pid 81620 00:16:09.856 18:25:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81620' 00:16:09.856 18:25:07 -- common/autotest_common.sh@955 -- # kill 81620 00:16:09.856 18:25:07 -- common/autotest_common.sh@960 -- # wait 
81620 00:16:09.856 18:25:07 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:09.856 [2024-11-17 18:24:50.550504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:09.856 [2024-11-17 18:24:50.550615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81620 ] 00:16:09.856 [2024-11-17 18:24:50.687479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.856 [2024-11-17 18:24:50.726887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.856 Running I/O for 15 seconds... 00:16:09.856 [2024-11-17 18:24:53.348766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.856 [2024-11-17 18:24:53.349035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.856 [2024-11-17 18:24:53.349143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.856 [2024-11-17 18:24:53.349230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.856 [2024-11-17 18:24:53.349318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.856 [2024-11-17 18:24:53.349432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.856 [2024-11-17 18:24:53.349506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.349582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.349687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.349763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.349850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.349927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.350004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.350095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.350180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.350253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.350339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.350442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.350516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.350639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.350718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.350796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.350844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.350861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.350877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.350906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.350922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.350936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.350951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.350966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.350982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.857 [2024-11-17 18:24:53.350995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:09.857 [2024-11-17 18:24:53.351069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.857 [2024-11-17 18:24:53.351097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.857 [2024-11-17 18:24:53.351154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.857 [2024-11-17 18:24:53.351182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.857 [2024-11-17 18:24:53.351310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 
18:24:53.351446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.857 [2024-11-17 18:24:53.351675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.857 [2024-11-17 18:24:53.351715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.857 [2024-11-17 18:24:53.351848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.857 [2024-11-17 18:24:53.351864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.351878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.351893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.351908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.351924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:123824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.351938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.351953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.351967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.351982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.351997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352071] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:124008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.352915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.352974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.352989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 
[2024-11-17 18:24:53.353003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.353018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.858 [2024-11-17 18:24:53.353033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.858 [2024-11-17 18:24:53.353048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.858 [2024-11-17 18:24:53.353063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353328] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.353358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.353388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.353535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.353567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.353626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.353903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.353932] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.353961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.353977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.353991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.354006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.354027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.354043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.354059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.354075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.354089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.354104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.354118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.354133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.859 [2024-11-17 18:24:53.354148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.354163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.354177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.354192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.354206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.859 [2024-11-17 18:24:53.354222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.859 [2024-11-17 18:24:53.354235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.860 [2024-11-17 18:24:53.354264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:53.354314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:53.354344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:53.354373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:53.354403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:53.354440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:53.354469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:53.354499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ba40 is same with the state(5) to be set 00:16:09.860 [2024-11-17 18:24:53.354558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:09.860 [2024-11-17 18:24:53.354570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:09.860 [2024-11-17 18:24:53.354584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123656 len:8 PRP1 0x0 PRP2 0x0 00:16:09.860 [2024-11-17 18:24:53.354599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354657] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x151ba40 was disconnected and freed. reset controller. 00:16:09.860 [2024-11-17 18:24:53.354678] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:09.860 [2024-11-17 18:24:53.354745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.860 [2024-11-17 18:24:53.354768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.860 [2024-11-17 18:24:53.354797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.860 [2024-11-17 18:24:53.354854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.860 [2024-11-17 18:24:53.354880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:53.354893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:09.860 [2024-11-17 18:24:53.354933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e7d40 (9): Bad file descriptor 00:16:09.860 [2024-11-17 18:24:53.357403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:09.860 [2024-11-17 18:24:53.384706] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
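The block ending above records one complete failover cycle: outstanding I/O on the qpair to 10.0.0.2:4420 is aborted with SQ DELETION, bdev_nvme_failover_trid moves the controller to 10.0.0.2:4421, and the subsequent controller reset completes successfully. For readers reproducing this outside the CI harness, a minimal sketch of how such an alternate TCP path can be registered through SPDK's rpc.py is shown below; the bdev name Nvme0 is an assumption, the NQN and addresses are taken from the log entries above, and the exact multipath option may vary between SPDK releases, so treat this as a sketch rather than the harness's actual setup.

    # Sketch only: attach the primary path (NQN/addresses from the log above, bdev name Nvme0 assumed)
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --multipath failover
    # Register the alternate listener so a failover target exists once the 4420 connection goes away
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 --multipath failover

With both paths attached in failover mode, the intent is that I/O aborted on the dropped connection (the SQ DELETION entries above) is retried on the surviving trid rather than failed back to the application.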
00:16:09.860 [2024-11-17 18:24:56.974091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974496] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:122 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.860 [2024-11-17 18:24:56.974901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.860 [2024-11-17 18:24:56.974928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.860 [2024-11-17 18:24:56.974942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.860 [2024-11-17 18:24:56.974955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.974968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 [2024-11-17 18:24:56.974981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.974995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 [2024-11-17 18:24:56.975008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 [2024-11-17 18:24:56.975036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 [2024-11-17 18:24:56.975069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 [2024-11-17 18:24:56.975097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 [2024-11-17 18:24:56.975124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 [2024-11-17 18:24:56.975151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3968 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 [2024-11-17 18:24:56.975178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 [2024-11-17 18:24:56.975220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 [2024-11-17 18:24:56.975248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 [2024-11-17 18:24:56.975443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.861 
[2024-11-17 18:24:56.975547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.861 [2024-11-17 18:24:56.975886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.861 [2024-11-17 18:24:56.975899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.975920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.975933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.975947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.975960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.975975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.975987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.862 [2024-11-17 18:24:56.976202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.862 [2024-11-17 18:24:56.976367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.862 [2024-11-17 18:24:56.976400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.862 [2024-11-17 18:24:56.976491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.862 [2024-11-17 18:24:56.976552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976863] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.862 [2024-11-17 18:24:56.976876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.976984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.976997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.977010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.977024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.862 [2024-11-17 18:24:56.977037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.977051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.977063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.977077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.862 [2024-11-17 18:24:56.977095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.977121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.862 [2024-11-17 18:24:56.977135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.862 [2024-11-17 18:24:56.977149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.863 [2024-11-17 18:24:56.977188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.863 [2024-11-17 18:24:56.977215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4320 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.863 [2024-11-17 18:24:56.977729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.863 [2024-11-17 18:24:56.977800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.863 [2024-11-17 18:24:56.977829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 
18:24:56.977857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.863 [2024-11-17 18:24:56.977912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.977974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.977989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.978002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.863 [2024-11-17 18:24:56.978029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.978057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.978085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.978113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.978141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.978168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.978196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.978224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.978252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.863 [2024-11-17 18:24:56.978280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506c00 is same with the state(5) to be set 00:16:09.863 [2024-11-17 18:24:56.978337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:09.863 [2024-11-17 18:24:56.978359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:09.863 [2024-11-17 18:24:56.978373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4536 len:8 PRP1 0x0 PRP2 0x0 00:16:09.863 [2024-11-17 18:24:56.978387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.863 [2024-11-17 18:24:56.978434] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1506c00 was disconnected and freed. reset controller. 
00:16:09.863 [2024-11-17 18:24:56.978459] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:09.863 [2024-11-17 18:24:56.978514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.864 [2024-11-17 18:24:56.978547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:24:56.978564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.864 [2024-11-17 18:24:56.978577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:24:56.978592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.864 [2024-11-17 18:24:56.978606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:24:56.978620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.864 [2024-11-17 18:24:56.978639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:24:56.978653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:09.864 [2024-11-17 18:24:56.981088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:09.864 [2024-11-17 18:24:56.981125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e7d40 (9): Bad file descriptor 00:16:09.864 [2024-11-17 18:24:57.012607] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
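The second cycle above repeats the same pattern one listener further along, failing over from 10.0.0.2:4421 to 10.0.0.2:4422 and again finishing with a successful controller reset. On the target side, the listeners the initiator rotates through would normally hang off a single subsystem; a hedged sketch of that layout follows, with the NQN and addresses taken from the log and everything else (transport defaults, the Malloc0 namespace) assumed for illustration.

    # Sketch only: target-side layout implied by the failover sequence in this log
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # namespace bdev name assumed
    for port in 4420 4421 4422; do
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -f ipv4 -a 10.0.0.2 -s "$port"
    done

Dropping or removing whichever listener currently carries the connection is one way to provoke the Bad file descriptor / resetting controller sequence recorded above.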
00:16:09.864 [2024-11-17 18:25:01.512923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.512978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.864 [2024-11-17 18:25:01.513254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 
18:25:01.513329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.864 [2024-11-17 18:25:01.513616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.864 [2024-11-17 18:25:01.513645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.864 [2024-11-17 18:25:01.513674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.864 [2024-11-17 18:25:01.513702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.864 [2024-11-17 18:25:01.513730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.864 [2024-11-17 18:25:01.513758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.864 [2024-11-17 18:25:01.513843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.864 [2024-11-17 18:25:01.513899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.864 [2024-11-17 18:25:01.513928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.513956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.513995] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.514010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.514026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.514040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.514056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.514070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.864 [2024-11-17 18:25:01.514085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.864 [2024-11-17 18:25:01.514099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112712 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.514933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.865 [2024-11-17 18:25:01.514962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.514978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 
[2024-11-17 18:25:01.514992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.515008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.515021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.515037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.515051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.515066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.515080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.515096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.515110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.515126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.515140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.515155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.515169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.515184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.515204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.865 [2024-11-17 18:25:01.515221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.865 [2024-11-17 18:25:01.515235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.515265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.515325] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.515387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.515478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.515509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.515569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.515599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.515639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.515669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.515774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515952] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.515981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.515996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.516141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.516170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.516200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.516258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.516306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.516647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.516678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.516723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.516812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.516841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:09.866 [2024-11-17 18:25:01.516912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 
[2024-11-17 18:25:01.516956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.516970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.516986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.866 [2024-11-17 18:25:01.517000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.866 [2024-11-17 18:25:01.517015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.867 [2024-11-17 18:25:01.517029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.867 [2024-11-17 18:25:01.517044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.867 [2024-11-17 18:25:01.517058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.867 [2024-11-17 18:25:01.517074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.867 [2024-11-17 18:25:01.517087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.867 [2024-11-17 18:25:01.517103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:09.867 [2024-11-17 18:25:01.517116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.867 [2024-11-17 18:25:01.517131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ea970 is same with the state(5) to be set 00:16:09.867 [2024-11-17 18:25:01.517148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:09.867 [2024-11-17 18:25:01.517159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:09.867 [2024-11-17 18:25:01.517169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112472 len:8 PRP1 0x0 PRP2 0x0 00:16:09.867 [2024-11-17 18:25:01.517183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.867 [2024-11-17 18:25:01.517245] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14ea970 was disconnected and freed. reset controller. 
00:16:09.867 [2024-11-17 18:25:01.517264] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:09.867 [2024-11-17 18:25:01.517331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.867 [2024-11-17 18:25:01.517366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.867 [2024-11-17 18:25:01.517382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.867 [2024-11-17 18:25:01.517397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.867 [2024-11-17 18:25:01.517412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.867 [2024-11-17 18:25:01.517425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.867 [2024-11-17 18:25:01.517440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.867 [2024-11-17 18:25:01.517456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.867 [2024-11-17 18:25:01.517471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:09.867 [2024-11-17 18:25:01.517518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e7d40 (9): Bad file descriptor 00:16:09.867 [2024-11-17 18:25:01.519925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:09.867 [2024-11-17 18:25:01.555222] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:09.867 00:16:09.867 Latency(us) 00:16:09.867 [2024-11-17T18:25:08.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.867 [2024-11-17T18:25:08.134Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:09.867 Verification LBA range: start 0x0 length 0x4000 00:16:09.867 NVMe0n1 : 15.01 13497.73 52.73 305.78 0.00 9254.65 547.37 16324.42 00:16:09.867 [2024-11-17T18:25:08.134Z] =================================================================================================================== 00:16:09.867 [2024-11-17T18:25:08.134Z] Total : 13497.73 52.73 305.78 0.00 9254.65 547.37 16324.42 00:16:09.867 Received shutdown signal, test time was about 15.000000 seconds 00:16:09.867 00:16:09.867 Latency(us) 00:16:09.867 [2024-11-17T18:25:08.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.867 [2024-11-17T18:25:08.134Z] =================================================================================================================== 00:16:09.867 [2024-11-17T18:25:08.134Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:09.867 18:25:07 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:09.867 18:25:07 -- host/failover.sh@65 -- # count=3 00:16:09.867 18:25:07 -- host/failover.sh@67 -- # (( count != 3 )) 00:16:09.867 18:25:07 -- host/failover.sh@73 -- # bdevperf_pid=81822 00:16:09.867 18:25:07 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:09.867 18:25:07 -- host/failover.sh@75 -- # waitforlisten 81822 /var/tmp/bdevperf.sock 00:16:09.867 18:25:07 -- common/autotest_common.sh@829 -- # '[' -z 81822 ']' 00:16:09.867 18:25:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:09.867 18:25:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:09.867 18:25:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:09.867 18:25:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.867 18:25:07 -- common/autotest_common.sh@10 -- # set +x 00:16:10.434 18:25:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.434 18:25:08 -- common/autotest_common.sh@862 -- # return 0 00:16:10.434 18:25:08 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:10.694 [2024-11-17 18:25:08.758883] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:10.694 18:25:08 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:10.953 [2024-11-17 18:25:08.995121] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:10.953 18:25:09 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:11.211 NVMe0n1 00:16:11.211 18:25:09 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:11.470 00:16:11.470 18:25:09 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:11.728 00:16:11.728 18:25:09 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:11.728 18:25:09 -- host/failover.sh@82 -- # grep -q NVMe0 00:16:12.294 18:25:10 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:12.295 18:25:10 -- host/failover.sh@87 -- # sleep 3 00:16:15.581 18:25:13 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:15.581 18:25:13 -- host/failover.sh@88 -- # grep -q NVMe0 00:16:15.581 18:25:13 -- host/failover.sh@90 -- # run_test_pid=81904 00:16:15.581 18:25:13 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:15.581 18:25:13 -- host/failover.sh@92 -- # wait 81904 00:16:16.965 0 00:16:16.965 18:25:14 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:16.965 [2024-11-17 18:25:07.479604] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:16.965 [2024-11-17 18:25:07.479740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81822 ] 00:16:16.965 [2024-11-17 18:25:07.619382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.965 [2024-11-17 18:25:07.651658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.965 [2024-11-17 18:25:10.505737] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:16.965 [2024-11-17 18:25:10.505870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.965 [2024-11-17 18:25:10.505896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.965 [2024-11-17 18:25:10.505913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.965 [2024-11-17 18:25:10.505927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.965 [2024-11-17 18:25:10.505941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.965 [2024-11-17 18:25:10.505953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.965 [2024-11-17 18:25:10.505966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.965 [2024-11-17 18:25:10.505979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.965 [2024-11-17 18:25:10.505992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:16.965 [2024-11-17 18:25:10.506041] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:16.965 [2024-11-17 18:25:10.506072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e27d40 (9): Bad file descriptor 00:16:16.965 [2024-11-17 18:25:10.512886] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:16.965 Running I/O for 1 seconds... 
00:16:16.965 00:16:16.965 Latency(us) 00:16:16.965 [2024-11-17T18:25:15.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.965 [2024-11-17T18:25:15.232Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:16.965 Verification LBA range: start 0x0 length 0x4000 00:16:16.965 NVMe0n1 : 1.01 13336.06 52.09 0.00 0.00 9546.77 919.74 11498.59 00:16:16.965 [2024-11-17T18:25:15.232Z] =================================================================================================================== 00:16:16.965 [2024-11-17T18:25:15.232Z] Total : 13336.06 52.09 0.00 0.00 9546.77 919.74 11498.59 00:16:16.965 18:25:14 -- host/failover.sh@95 -- # grep -q NVMe0 00:16:16.965 18:25:14 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:17.238 18:25:15 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:17.496 18:25:15 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:17.496 18:25:15 -- host/failover.sh@99 -- # grep -q NVMe0 00:16:17.496 18:25:15 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:17.755 18:25:16 -- host/failover.sh@101 -- # sleep 3 00:16:21.040 18:25:19 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:21.040 18:25:19 -- host/failover.sh@103 -- # grep -q NVMe0 00:16:21.040 18:25:19 -- host/failover.sh@108 -- # killprocess 81822 00:16:21.040 18:25:19 -- common/autotest_common.sh@936 -- # '[' -z 81822 ']' 00:16:21.040 18:25:19 -- common/autotest_common.sh@940 -- # kill -0 81822 00:16:21.040 18:25:19 -- common/autotest_common.sh@941 -- # uname 00:16:21.040 18:25:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:21.040 18:25:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81822 00:16:21.299 killing process with pid 81822 00:16:21.299 18:25:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:21.299 18:25:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:21.299 18:25:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81822' 00:16:21.299 18:25:19 -- common/autotest_common.sh@955 -- # kill 81822 00:16:21.299 18:25:19 -- common/autotest_common.sh@960 -- # wait 81822 00:16:21.299 18:25:19 -- host/failover.sh@110 -- # sync 00:16:21.299 18:25:19 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.557 18:25:19 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:21.557 18:25:19 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:21.557 18:25:19 -- host/failover.sh@116 -- # nvmftestfini 00:16:21.557 18:25:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:21.557 18:25:19 -- nvmf/common.sh@116 -- # sync 00:16:21.557 18:25:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:21.557 18:25:19 -- nvmf/common.sh@119 -- # set +e 00:16:21.557 18:25:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:21.557 18:25:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:21.557 rmmod nvme_tcp 
00:16:21.557 rmmod nvme_fabrics 00:16:21.557 rmmod nvme_keyring 00:16:21.557 18:25:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:21.558 18:25:19 -- nvmf/common.sh@123 -- # set -e 00:16:21.558 18:25:19 -- nvmf/common.sh@124 -- # return 0 00:16:21.558 18:25:19 -- nvmf/common.sh@477 -- # '[' -n 81570 ']' 00:16:21.558 18:25:19 -- nvmf/common.sh@478 -- # killprocess 81570 00:16:21.558 18:25:19 -- common/autotest_common.sh@936 -- # '[' -z 81570 ']' 00:16:21.558 18:25:19 -- common/autotest_common.sh@940 -- # kill -0 81570 00:16:21.558 18:25:19 -- common/autotest_common.sh@941 -- # uname 00:16:21.558 18:25:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:21.558 18:25:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81570 00:16:21.817 killing process with pid 81570 00:16:21.817 18:25:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:21.817 18:25:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:21.817 18:25:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81570' 00:16:21.817 18:25:19 -- common/autotest_common.sh@955 -- # kill 81570 00:16:21.817 18:25:19 -- common/autotest_common.sh@960 -- # wait 81570 00:16:21.817 18:25:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:21.817 18:25:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:21.817 18:25:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:21.817 18:25:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.817 18:25:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:21.817 18:25:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.817 18:25:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.817 18:25:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.817 18:25:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:21.817 00:16:21.817 real 0m32.132s 00:16:21.817 user 2m5.436s 00:16:21.817 sys 0m5.401s 00:16:21.817 18:25:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:21.817 ************************************ 00:16:21.817 18:25:20 -- common/autotest_common.sh@10 -- # set +x 00:16:21.817 END TEST nvmf_failover 00:16:21.817 ************************************ 00:16:21.817 18:25:20 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:21.817 18:25:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:21.817 18:25:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.817 18:25:20 -- common/autotest_common.sh@10 -- # set +x 00:16:21.817 ************************************ 00:16:21.817 START TEST nvmf_discovery 00:16:21.817 ************************************ 00:16:21.817 18:25:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:22.076 * Looking for test storage... 
00:16:22.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:22.076 18:25:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:22.076 18:25:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:22.076 18:25:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:22.076 18:25:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:22.076 18:25:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:22.076 18:25:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:22.076 18:25:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:22.076 18:25:20 -- scripts/common.sh@335 -- # IFS=.-: 00:16:22.076 18:25:20 -- scripts/common.sh@335 -- # read -ra ver1 00:16:22.076 18:25:20 -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.076 18:25:20 -- scripts/common.sh@336 -- # read -ra ver2 00:16:22.076 18:25:20 -- scripts/common.sh@337 -- # local 'op=<' 00:16:22.076 18:25:20 -- scripts/common.sh@339 -- # ver1_l=2 00:16:22.076 18:25:20 -- scripts/common.sh@340 -- # ver2_l=1 00:16:22.076 18:25:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:22.076 18:25:20 -- scripts/common.sh@343 -- # case "$op" in 00:16:22.076 18:25:20 -- scripts/common.sh@344 -- # : 1 00:16:22.076 18:25:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:22.076 18:25:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:22.076 18:25:20 -- scripts/common.sh@364 -- # decimal 1 00:16:22.076 18:25:20 -- scripts/common.sh@352 -- # local d=1 00:16:22.076 18:25:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.076 18:25:20 -- scripts/common.sh@354 -- # echo 1 00:16:22.076 18:25:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:22.076 18:25:20 -- scripts/common.sh@365 -- # decimal 2 00:16:22.076 18:25:20 -- scripts/common.sh@352 -- # local d=2 00:16:22.076 18:25:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.076 18:25:20 -- scripts/common.sh@354 -- # echo 2 00:16:22.076 18:25:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:22.076 18:25:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:22.076 18:25:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:22.076 18:25:20 -- scripts/common.sh@367 -- # return 0 00:16:22.076 18:25:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.076 18:25:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:22.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.077 --rc genhtml_branch_coverage=1 00:16:22.077 --rc genhtml_function_coverage=1 00:16:22.077 --rc genhtml_legend=1 00:16:22.077 --rc geninfo_all_blocks=1 00:16:22.077 --rc geninfo_unexecuted_blocks=1 00:16:22.077 00:16:22.077 ' 00:16:22.077 18:25:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:22.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.077 --rc genhtml_branch_coverage=1 00:16:22.077 --rc genhtml_function_coverage=1 00:16:22.077 --rc genhtml_legend=1 00:16:22.077 --rc geninfo_all_blocks=1 00:16:22.077 --rc geninfo_unexecuted_blocks=1 00:16:22.077 00:16:22.077 ' 00:16:22.077 18:25:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:22.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.077 --rc genhtml_branch_coverage=1 00:16:22.077 --rc genhtml_function_coverage=1 00:16:22.077 --rc genhtml_legend=1 00:16:22.077 --rc geninfo_all_blocks=1 00:16:22.077 --rc geninfo_unexecuted_blocks=1 00:16:22.077 00:16:22.077 ' 00:16:22.077 
18:25:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:22.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.077 --rc genhtml_branch_coverage=1 00:16:22.077 --rc genhtml_function_coverage=1 00:16:22.077 --rc genhtml_legend=1 00:16:22.077 --rc geninfo_all_blocks=1 00:16:22.077 --rc geninfo_unexecuted_blocks=1 00:16:22.077 00:16:22.077 ' 00:16:22.077 18:25:20 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.077 18:25:20 -- nvmf/common.sh@7 -- # uname -s 00:16:22.077 18:25:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.077 18:25:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.077 18:25:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.077 18:25:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.077 18:25:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.077 18:25:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.077 18:25:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.077 18:25:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.077 18:25:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.077 18:25:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.077 18:25:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:16:22.077 18:25:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:16:22.077 18:25:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.077 18:25:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.077 18:25:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.077 18:25:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.077 18:25:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.077 18:25:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.077 18:25:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.077 18:25:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.077 18:25:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.077 18:25:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.077 18:25:20 -- paths/export.sh@5 -- # export PATH 00:16:22.077 18:25:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.077 18:25:20 -- nvmf/common.sh@46 -- # : 0 00:16:22.077 18:25:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:22.077 18:25:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:22.077 18:25:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:22.077 18:25:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.077 18:25:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.077 18:25:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:22.077 18:25:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:22.077 18:25:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:22.077 18:25:20 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:22.077 18:25:20 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:22.077 18:25:20 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:22.077 18:25:20 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:22.077 18:25:20 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:22.077 18:25:20 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:22.077 18:25:20 -- host/discovery.sh@25 -- # nvmftestinit 00:16:22.077 18:25:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:22.077 18:25:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.077 18:25:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:22.077 18:25:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:22.077 18:25:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:22.077 18:25:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.077 18:25:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.077 18:25:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.077 18:25:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:22.077 18:25:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:22.077 18:25:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:22.077 18:25:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:22.077 18:25:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:22.077 18:25:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:22.077 18:25:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.077 18:25:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.077 18:25:20 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:22.077 18:25:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:22.077 18:25:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.077 18:25:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.077 18:25:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.077 18:25:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.077 18:25:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.077 18:25:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.077 18:25:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.077 18:25:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.077 18:25:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:22.077 18:25:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:22.077 Cannot find device "nvmf_tgt_br" 00:16:22.077 18:25:20 -- nvmf/common.sh@154 -- # true 00:16:22.077 18:25:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.077 Cannot find device "nvmf_tgt_br2" 00:16:22.077 18:25:20 -- nvmf/common.sh@155 -- # true 00:16:22.077 18:25:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:22.077 18:25:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:22.077 Cannot find device "nvmf_tgt_br" 00:16:22.077 18:25:20 -- nvmf/common.sh@157 -- # true 00:16:22.077 18:25:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:22.336 Cannot find device "nvmf_tgt_br2" 00:16:22.336 18:25:20 -- nvmf/common.sh@158 -- # true 00:16:22.336 18:25:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:22.336 18:25:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:22.336 18:25:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.336 18:25:20 -- nvmf/common.sh@161 -- # true 00:16:22.336 18:25:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.336 18:25:20 -- nvmf/common.sh@162 -- # true 00:16:22.336 18:25:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.336 18:25:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.336 18:25:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.336 18:25:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.336 18:25:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.336 18:25:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.336 18:25:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.336 18:25:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:22.336 18:25:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:22.336 18:25:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:22.336 18:25:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:22.336 18:25:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:22.336 18:25:20 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:22.336 18:25:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:22.336 18:25:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:22.336 18:25:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:22.336 18:25:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:22.336 18:25:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:22.336 18:25:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:22.336 18:25:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:22.336 18:25:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:22.336 18:25:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:22.336 18:25:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:22.336 18:25:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:22.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:22.336 00:16:22.336 --- 10.0.0.2 ping statistics --- 00:16:22.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.336 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:22.336 18:25:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:22.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:22.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:16:22.336 00:16:22.336 --- 10.0.0.3 ping statistics --- 00:16:22.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.336 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:22.336 18:25:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:22.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:16:22.595 00:16:22.595 --- 10.0.0.1 ping statistics --- 00:16:22.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.595 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:22.595 18:25:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.595 18:25:20 -- nvmf/common.sh@421 -- # return 0 00:16:22.595 18:25:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:22.595 18:25:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.595 18:25:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:22.595 18:25:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:22.595 18:25:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.595 18:25:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:22.595 18:25:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:22.595 18:25:20 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:22.595 18:25:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:22.595 18:25:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:22.595 18:25:20 -- common/autotest_common.sh@10 -- # set +x 00:16:22.595 18:25:20 -- nvmf/common.sh@469 -- # nvmfpid=82186 00:16:22.595 18:25:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:22.595 18:25:20 -- nvmf/common.sh@470 -- # waitforlisten 82186 00:16:22.595 18:25:20 -- common/autotest_common.sh@829 -- # '[' -z 82186 ']' 00:16:22.595 18:25:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.595 18:25:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.595 18:25:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.595 18:25:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.595 18:25:20 -- common/autotest_common.sh@10 -- # set +x 00:16:22.595 [2024-11-17 18:25:20.708613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:22.595 [2024-11-17 18:25:20.708737] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.595 [2024-11-17 18:25:20.850242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.854 [2024-11-17 18:25:20.890196] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:22.854 [2024-11-17 18:25:20.890385] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.854 [2024-11-17 18:25:20.890402] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.854 [2024-11-17 18:25:20.890413] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
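The virtual test network that the pings above validate is built by nvmf_veth_init from one network namespace, veth pairs and a bridge; a condensed sketch of the same commands follows, with device names and addresses exactly as in this log (the second target interface, nvmf_tgt_if2 / 10.0.0.3, follows the same pattern and is omitted for brevity).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                            # bridge the two root-namespace veth peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                                 # root namespace -> target, as verified above
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1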
00:16:22.854 [2024-11-17 18:25:20.890447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.421 18:25:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.421 18:25:21 -- common/autotest_common.sh@862 -- # return 0 00:16:23.421 18:25:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:23.421 18:25:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.421 18:25:21 -- common/autotest_common.sh@10 -- # set +x 00:16:23.680 18:25:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.680 18:25:21 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:23.680 18:25:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.680 18:25:21 -- common/autotest_common.sh@10 -- # set +x 00:16:23.680 [2024-11-17 18:25:21.724116] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.680 18:25:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.680 18:25:21 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:23.680 18:25:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.680 18:25:21 -- common/autotest_common.sh@10 -- # set +x 00:16:23.680 [2024-11-17 18:25:21.732275] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:23.680 18:25:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.680 18:25:21 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:23.680 18:25:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.680 18:25:21 -- common/autotest_common.sh@10 -- # set +x 00:16:23.680 null0 00:16:23.680 18:25:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.680 18:25:21 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:23.680 18:25:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.680 18:25:21 -- common/autotest_common.sh@10 -- # set +x 00:16:23.680 null1 00:16:23.680 18:25:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.680 18:25:21 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:23.680 18:25:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.680 18:25:21 -- common/autotest_common.sh@10 -- # set +x 00:16:23.680 18:25:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.680 18:25:21 -- host/discovery.sh@45 -- # hostpid=82218 00:16:23.680 18:25:21 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:23.680 18:25:21 -- host/discovery.sh@46 -- # waitforlisten 82218 /tmp/host.sock 00:16:23.680 18:25:21 -- common/autotest_common.sh@829 -- # '[' -z 82218 ']' 00:16:23.680 18:25:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:23.680 18:25:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.680 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:23.680 18:25:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:23.680 18:25:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.680 18:25:21 -- common/autotest_common.sh@10 -- # set +x 00:16:23.680 [2024-11-17 18:25:21.802676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
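Bringing up the discovery side of the target, as traced above, is likewise only a few RPC calls plus a second nvmf_tgt instance that will act as the discovering host; a minimal sketch, assuming rpc.py talks to the target's default /var/tmp/spdk.sock (which is what rpc_cmd wraps here) and with the transport options copied verbatim from the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                          # TCP transport with the harness's options
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
         -t tcp -a 10.0.0.2 -s 8009                                       # discovery service listener on port 8009
    $rpc bdev_null_create null0 1000 512                                  # two null bdevs (size in MB, 512-byte blocks) to export later
    $rpc bdev_null_create null1 1000 512
    # a second application on its own RPC socket plays the "host" and will run bdev_nvme discovery
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &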
00:16:23.680 [2024-11-17 18:25:21.802757] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82218 ] 00:16:23.680 [2024-11-17 18:25:21.937575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.939 [2024-11-17 18:25:21.979151] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:23.939 [2024-11-17 18:25:21.979378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.939 18:25:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.939 18:25:22 -- common/autotest_common.sh@862 -- # return 0 00:16:23.939 18:25:22 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:23.939 18:25:22 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:23.939 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.939 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:23.939 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.939 18:25:22 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:23.939 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.939 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:23.939 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.939 18:25:22 -- host/discovery.sh@72 -- # notify_id=0 00:16:23.939 18:25:22 -- host/discovery.sh@78 -- # get_subsystem_names 00:16:23.939 18:25:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:23.939 18:25:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:23.939 18:25:22 -- host/discovery.sh@59 -- # sort 00:16:23.939 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.939 18:25:22 -- host/discovery.sh@59 -- # xargs 00:16:23.939 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:23.939 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.939 18:25:22 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:16:23.939 18:25:22 -- host/discovery.sh@79 -- # get_bdev_list 00:16:23.939 18:25:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:23.939 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.939 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:23.939 18:25:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:23.939 18:25:22 -- host/discovery.sh@55 -- # sort 00:16:23.939 18:25:22 -- host/discovery.sh@55 -- # xargs 00:16:23.939 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.939 18:25:22 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:16:23.939 18:25:22 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:23.939 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.939 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:23.939 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.939 18:25:22 -- host/discovery.sh@82 -- # get_subsystem_names 00:16:23.939 18:25:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:23.939 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.939 18:25:22 -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.939 18:25:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:23.939 18:25:22 -- host/discovery.sh@59 -- # sort 00:16:23.939 18:25:22 -- host/discovery.sh@59 -- # xargs 00:16:24.198 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.198 18:25:22 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:16:24.198 18:25:22 -- host/discovery.sh@83 -- # get_bdev_list 00:16:24.198 18:25:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.198 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.198 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:24.198 18:25:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:24.198 18:25:22 -- host/discovery.sh@55 -- # sort 00:16:24.198 18:25:22 -- host/discovery.sh@55 -- # xargs 00:16:24.198 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.198 18:25:22 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:24.198 18:25:22 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:24.198 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.198 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:24.198 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.198 18:25:22 -- host/discovery.sh@86 -- # get_subsystem_names 00:16:24.198 18:25:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:24.198 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.198 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:24.198 18:25:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:24.198 18:25:22 -- host/discovery.sh@59 -- # sort 00:16:24.198 18:25:22 -- host/discovery.sh@59 -- # xargs 00:16:24.198 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.198 18:25:22 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:16:24.198 18:25:22 -- host/discovery.sh@87 -- # get_bdev_list 00:16:24.198 18:25:22 -- host/discovery.sh@55 -- # sort 00:16:24.198 18:25:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.198 18:25:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:24.198 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.198 18:25:22 -- host/discovery.sh@55 -- # xargs 00:16:24.198 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:24.198 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.198 18:25:22 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:24.198 18:25:22 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:24.198 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.198 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:24.198 [2024-11-17 18:25:22.420489] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.198 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.198 18:25:22 -- host/discovery.sh@92 -- # get_subsystem_names 00:16:24.198 18:25:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:24.198 18:25:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:24.198 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.198 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:24.198 18:25:22 -- host/discovery.sh@59 -- # sort 00:16:24.198 18:25:22 -- host/discovery.sh@59 -- # xargs 
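Every get_subsystem_names / get_bdev_list / get_notification_count probe interleaved through this part of the trace is a one-line RPC-plus-jq pipeline against the host application's socket; the sketch below reproduces those probes for manual use (the hrpc helper is illustrative, while the RPC names, socket path and discovery arguments are taken from the trace).
    hrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }
    hrpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    hrpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs      # attached controllers, e.g. nvme0
    hrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs                 # discovered namespaces exposed as bdevs, e.g. nvme0n1
    hrpc notify_get_notifications -i 0 | jq '. | length'                  # bdev add/remove notifications seen since id 0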
00:16:24.198 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.457 18:25:22 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:24.457 18:25:22 -- host/discovery.sh@93 -- # get_bdev_list 00:16:24.457 18:25:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:24.457 18:25:22 -- host/discovery.sh@55 -- # sort 00:16:24.457 18:25:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.457 18:25:22 -- host/discovery.sh@55 -- # xargs 00:16:24.457 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.457 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:24.457 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.457 18:25:22 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:16:24.457 18:25:22 -- host/discovery.sh@94 -- # get_notification_count 00:16:24.457 18:25:22 -- host/discovery.sh@74 -- # jq '. | length' 00:16:24.457 18:25:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:24.457 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.457 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:24.457 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.457 18:25:22 -- host/discovery.sh@74 -- # notification_count=0 00:16:24.457 18:25:22 -- host/discovery.sh@75 -- # notify_id=0 00:16:24.457 18:25:22 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:16:24.457 18:25:22 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:24.457 18:25:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.457 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:24.457 18:25:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.457 18:25:22 -- host/discovery.sh@100 -- # sleep 1 00:16:25.024 [2024-11-17 18:25:23.086942] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:25.024 [2024-11-17 18:25:23.086993] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:25.024 [2024-11-17 18:25:23.087014] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:25.024 [2024-11-17 18:25:23.092982] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:25.024 [2024-11-17 18:25:23.148679] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:25.024 [2024-11-17 18:25:23.148737] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:25.591 18:25:23 -- host/discovery.sh@101 -- # get_subsystem_names 00:16:25.591 18:25:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:25.591 18:25:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:25.591 18:25:23 -- host/discovery.sh@59 -- # xargs 00:16:25.591 18:25:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.591 18:25:23 -- host/discovery.sh@59 -- # sort 00:16:25.591 18:25:23 -- common/autotest_common.sh@10 -- # set +x 00:16:25.591 18:25:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.591 18:25:23 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.591 18:25:23 -- host/discovery.sh@102 -- # get_bdev_list 00:16:25.591 18:25:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:16:25.591 18:25:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.591 18:25:23 -- common/autotest_common.sh@10 -- # set +x 00:16:25.591 18:25:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:25.591 18:25:23 -- host/discovery.sh@55 -- # sort 00:16:25.591 18:25:23 -- host/discovery.sh@55 -- # xargs 00:16:25.591 18:25:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.591 18:25:23 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:25.591 18:25:23 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:16:25.591 18:25:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:25.591 18:25:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.591 18:25:23 -- common/autotest_common.sh@10 -- # set +x 00:16:25.591 18:25:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:25.591 18:25:23 -- host/discovery.sh@63 -- # xargs 00:16:25.591 18:25:23 -- host/discovery.sh@63 -- # sort -n 00:16:25.591 18:25:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.591 18:25:23 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:16:25.591 18:25:23 -- host/discovery.sh@104 -- # get_notification_count 00:16:25.591 18:25:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:25.591 18:25:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.591 18:25:23 -- common/autotest_common.sh@10 -- # set +x 00:16:25.591 18:25:23 -- host/discovery.sh@74 -- # jq '. | length' 00:16:25.591 18:25:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.591 18:25:23 -- host/discovery.sh@74 -- # notification_count=1 00:16:25.591 18:25:23 -- host/discovery.sh@75 -- # notify_id=1 00:16:25.591 18:25:23 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:16:25.591 18:25:23 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:25.591 18:25:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.591 18:25:23 -- common/autotest_common.sh@10 -- # set +x 00:16:25.591 18:25:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.591 18:25:23 -- host/discovery.sh@109 -- # sleep 1 00:16:26.526 18:25:24 -- host/discovery.sh@110 -- # get_bdev_list 00:16:26.786 18:25:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:26.786 18:25:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:26.786 18:25:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.786 18:25:24 -- common/autotest_common.sh@10 -- # set +x 00:16:26.786 18:25:24 -- host/discovery.sh@55 -- # sort 00:16:26.786 18:25:24 -- host/discovery.sh@55 -- # xargs 00:16:26.786 18:25:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.786 18:25:24 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:26.786 18:25:24 -- host/discovery.sh@111 -- # get_notification_count 00:16:26.786 18:25:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:26.786 18:25:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:26.786 18:25:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.786 18:25:24 -- common/autotest_common.sh@10 -- # set +x 00:16:26.786 18:25:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.786 18:25:24 -- host/discovery.sh@74 -- # notification_count=1 00:16:26.786 18:25:24 -- host/discovery.sh@75 -- # notify_id=2 00:16:26.786 18:25:24 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:16:26.786 18:25:24 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:26.786 18:25:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.786 18:25:24 -- common/autotest_common.sh@10 -- # set +x 00:16:26.786 [2024-11-17 18:25:24.903219] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:26.786 [2024-11-17 18:25:24.904305] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:26.786 [2024-11-17 18:25:24.904387] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:26.786 18:25:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.786 18:25:24 -- host/discovery.sh@117 -- # sleep 1 00:16:26.786 [2024-11-17 18:25:24.910271] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:26.786 [2024-11-17 18:25:24.970631] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:26.786 [2024-11-17 18:25:24.970675] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:26.786 [2024-11-17 18:25:24.970683] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:27.724 18:25:25 -- host/discovery.sh@118 -- # get_subsystem_names 00:16:27.724 18:25:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:27.724 18:25:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:27.724 18:25:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.724 18:25:25 -- common/autotest_common.sh@10 -- # set +x 00:16:27.724 18:25:25 -- host/discovery.sh@59 -- # sort 00:16:27.724 18:25:25 -- host/discovery.sh@59 -- # xargs 00:16:27.724 18:25:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.724 18:25:25 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.724 18:25:25 -- host/discovery.sh@119 -- # get_bdev_list 00:16:27.724 18:25:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.724 18:25:25 -- host/discovery.sh@55 -- # sort 00:16:27.724 18:25:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:27.724 18:25:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.724 18:25:25 -- common/autotest_common.sh@10 -- # set +x 00:16:27.724 18:25:25 -- host/discovery.sh@55 -- # xargs 00:16:27.983 18:25:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.983 18:25:26 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:27.983 18:25:26 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:16:27.983 18:25:26 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:27.983 18:25:26 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:27.983 18:25:26 -- host/discovery.sh@63 
-- # xargs 00:16:27.983 18:25:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.983 18:25:26 -- common/autotest_common.sh@10 -- # set +x 00:16:27.983 18:25:26 -- host/discovery.sh@63 -- # sort -n 00:16:27.983 18:25:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.983 18:25:26 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:27.983 18:25:26 -- host/discovery.sh@121 -- # get_notification_count 00:16:27.983 18:25:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:27.983 18:25:26 -- host/discovery.sh@74 -- # jq '. | length' 00:16:27.983 18:25:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.983 18:25:26 -- common/autotest_common.sh@10 -- # set +x 00:16:27.983 18:25:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.983 18:25:26 -- host/discovery.sh@74 -- # notification_count=0 00:16:27.983 18:25:26 -- host/discovery.sh@75 -- # notify_id=2 00:16:27.983 18:25:26 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:16:27.983 18:25:26 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:27.983 18:25:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.983 18:25:26 -- common/autotest_common.sh@10 -- # set +x 00:16:27.983 [2024-11-17 18:25:26.133937] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:27.983 [2024-11-17 18:25:26.133989] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:27.983 [2024-11-17 18:25:26.136646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.983 [2024-11-17 18:25:26.136733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.983 [2024-11-17 18:25:26.136763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.983 [2024-11-17 18:25:26.136772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.983 [2024-11-17 18:25:26.136781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.983 [2024-11-17 18:25:26.136790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.983 [2024-11-17 18:25:26.136799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.983 [2024-11-17 18:25:26.136808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.983 [2024-11-17 18:25:26.136816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f150 is same with the state(5) to be set 00:16:27.983 18:25:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.983 18:25:26 -- host/discovery.sh@127 -- # sleep 1 00:16:27.984 [2024-11-17 18:25:26.139948] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:27.984 [2024-11-17 18:25:26.139997] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:27.984 [2024-11-17 18:25:26.140054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f150 (9): Bad file descriptor 00:16:28.919 18:25:27 -- host/discovery.sh@128 -- # get_subsystem_names 00:16:28.919 18:25:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:28.919 18:25:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:28.919 18:25:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.919 18:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:28.919 18:25:27 -- host/discovery.sh@59 -- # sort 00:16:28.919 18:25:27 -- host/discovery.sh@59 -- # xargs 00:16:28.919 18:25:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.177 18:25:27 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.177 18:25:27 -- host/discovery.sh@129 -- # get_bdev_list 00:16:29.177 18:25:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.177 18:25:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.177 18:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:29.177 18:25:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.177 18:25:27 -- host/discovery.sh@55 -- # sort 00:16:29.177 18:25:27 -- host/discovery.sh@55 -- # xargs 00:16:29.177 18:25:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.177 18:25:27 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:29.177 18:25:27 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:16:29.177 18:25:27 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:29.177 18:25:27 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:29.177 18:25:27 -- host/discovery.sh@63 -- # sort -n 00:16:29.177 18:25:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.177 18:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:29.177 18:25:27 -- host/discovery.sh@63 -- # xargs 00:16:29.177 18:25:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.177 18:25:27 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:16:29.177 18:25:27 -- host/discovery.sh@131 -- # get_notification_count 00:16:29.178 18:25:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:29.178 18:25:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.178 18:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:29.178 18:25:27 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:29.178 18:25:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.178 18:25:27 -- host/discovery.sh@74 -- # notification_count=0 00:16:29.178 18:25:27 -- host/discovery.sh@75 -- # notify_id=2 00:16:29.178 18:25:27 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:16:29.178 18:25:27 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:29.178 18:25:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.178 18:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:29.178 18:25:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.178 18:25:27 -- host/discovery.sh@135 -- # sleep 1 00:16:30.551 18:25:28 -- host/discovery.sh@136 -- # get_subsystem_names 00:16:30.551 18:25:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:30.551 18:25:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:30.551 18:25:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.551 18:25:28 -- host/discovery.sh@59 -- # sort 00:16:30.551 18:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:30.551 18:25:28 -- host/discovery.sh@59 -- # xargs 00:16:30.551 18:25:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.551 18:25:28 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:16:30.551 18:25:28 -- host/discovery.sh@137 -- # get_bdev_list 00:16:30.551 18:25:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.551 18:25:28 -- host/discovery.sh@55 -- # xargs 00:16:30.551 18:25:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.551 18:25:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:30.551 18:25:28 -- host/discovery.sh@55 -- # sort 00:16:30.551 18:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:30.551 18:25:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.551 18:25:28 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:16:30.551 18:25:28 -- host/discovery.sh@138 -- # get_notification_count 00:16:30.551 18:25:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:30.551 18:25:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.551 18:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:30.551 18:25:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:30.551 18:25:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.551 18:25:28 -- host/discovery.sh@74 -- # notification_count=2 00:16:30.551 18:25:28 -- host/discovery.sh@75 -- # notify_id=4 00:16:30.551 18:25:28 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:16:30.551 18:25:28 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:30.551 18:25:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.551 18:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:31.602 [2024-11-17 18:25:29.549670] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:31.602 [2024-11-17 18:25:29.549717] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:31.602 [2024-11-17 18:25:29.549751] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:31.602 [2024-11-17 18:25:29.555701] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:31.602 [2024-11-17 18:25:29.615091] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:31.602 [2024-11-17 18:25:29.615169] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:31.602 18:25:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.602 18:25:29 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.602 18:25:29 -- common/autotest_common.sh@650 -- # local es=0 00:16:31.602 18:25:29 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.602 18:25:29 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:31.602 18:25:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.602 18:25:29 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:31.602 18:25:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.602 18:25:29 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.602 18:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.602 18:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:31.602 request: 00:16:31.602 { 00:16:31.602 "name": "nvme", 00:16:31.602 "trtype": "tcp", 00:16:31.602 "traddr": "10.0.0.2", 00:16:31.602 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:31.602 "adrfam": "ipv4", 00:16:31.602 "trsvcid": "8009", 00:16:31.602 "wait_for_attach": true, 00:16:31.602 "method": "bdev_nvme_start_discovery", 00:16:31.602 "req_id": 1 00:16:31.602 } 00:16:31.602 Got JSON-RPC error response 00:16:31.602 response: 00:16:31.602 { 00:16:31.602 "code": -17, 00:16:31.602 "message": "File exists" 00:16:31.602 } 00:16:31.602 18:25:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:31.602 18:25:29 -- common/autotest_common.sh@653 -- # es=1 00:16:31.602 18:25:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.602 18:25:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.602 18:25:29 -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.602 18:25:29 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:16:31.602 18:25:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:31.602 18:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.602 18:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:31.603 18:25:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:31.603 18:25:29 -- host/discovery.sh@67 -- # sort 00:16:31.603 18:25:29 -- host/discovery.sh@67 -- # xargs 00:16:31.603 18:25:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.603 18:25:29 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:16:31.603 18:25:29 -- host/discovery.sh@147 -- # get_bdev_list 00:16:31.603 18:25:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:31.603 18:25:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.603 18:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.603 18:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:31.603 18:25:29 -- host/discovery.sh@55 -- # sort 00:16:31.603 18:25:29 -- host/discovery.sh@55 -- # xargs 00:16:31.603 18:25:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.603 18:25:29 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:31.603 18:25:29 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.603 18:25:29 -- common/autotest_common.sh@650 -- # local es=0 00:16:31.603 18:25:29 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.603 18:25:29 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:31.603 18:25:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.603 18:25:29 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:31.603 18:25:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.603 18:25:29 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.603 18:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.603 18:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:31.603 request: 00:16:31.603 { 00:16:31.603 "name": "nvme_second", 00:16:31.603 "trtype": "tcp", 00:16:31.603 "traddr": "10.0.0.2", 00:16:31.603 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:31.603 "adrfam": "ipv4", 00:16:31.603 "trsvcid": "8009", 00:16:31.603 "wait_for_attach": true, 00:16:31.603 "method": "bdev_nvme_start_discovery", 00:16:31.603 "req_id": 1 00:16:31.603 } 00:16:31.603 Got JSON-RPC error response 00:16:31.603 response: 00:16:31.603 { 00:16:31.603 "code": -17, 00:16:31.603 "message": "File exists" 00:16:31.603 } 00:16:31.603 18:25:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:31.603 18:25:29 -- common/autotest_common.sh@653 -- # es=1 00:16:31.603 18:25:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.603 18:25:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.603 18:25:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.603 18:25:29 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:16:31.603 18:25:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:16:31.603 18:25:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:31.603 18:25:29 -- host/discovery.sh@67 -- # sort 00:16:31.603 18:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.603 18:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:31.603 18:25:29 -- host/discovery.sh@67 -- # xargs 00:16:31.603 18:25:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.603 18:25:29 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:16:31.603 18:25:29 -- host/discovery.sh@153 -- # get_bdev_list 00:16:31.603 18:25:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.603 18:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.603 18:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:31.603 18:25:29 -- host/discovery.sh@55 -- # sort 00:16:31.603 18:25:29 -- host/discovery.sh@55 -- # xargs 00:16:31.603 18:25:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:31.603 18:25:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.603 18:25:29 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:31.603 18:25:29 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:31.603 18:25:29 -- common/autotest_common.sh@650 -- # local es=0 00:16:31.603 18:25:29 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:31.603 18:25:29 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:31.603 18:25:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.603 18:25:29 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:31.603 18:25:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.603 18:25:29 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:31.603 18:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.603 18:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:32.978 [2024-11-17 18:25:30.873006] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:32.978 [2024-11-17 18:25:30.873154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:32.978 [2024-11-17 18:25:30.873201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:32.978 [2024-11-17 18:25:30.873233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc50350 with addr=10.0.0.2, port=8010 00:16:32.978 [2024-11-17 18:25:30.873252] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:32.978 [2024-11-17 18:25:30.873261] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:32.978 [2024-11-17 18:25:30.873271] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:33.913 [2024-11-17 18:25:31.873013] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:33.913 [2024-11-17 18:25:31.873140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:33.913 [2024-11-17 18:25:31.873183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:33.913 [2024-11-17 
18:25:31.873198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc50350 with addr=10.0.0.2, port=8010 00:16:33.913 [2024-11-17 18:25:31.873215] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:33.913 [2024-11-17 18:25:31.873225] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:33.913 [2024-11-17 18:25:31.873233] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:34.848 [2024-11-17 18:25:32.872879] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:34.848 request: 00:16:34.848 { 00:16:34.848 "name": "nvme_second", 00:16:34.848 "trtype": "tcp", 00:16:34.848 "traddr": "10.0.0.2", 00:16:34.848 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:34.848 "adrfam": "ipv4", 00:16:34.848 "trsvcid": "8010", 00:16:34.848 "attach_timeout_ms": 3000, 00:16:34.848 "method": "bdev_nvme_start_discovery", 00:16:34.848 "req_id": 1 00:16:34.848 } 00:16:34.848 Got JSON-RPC error response 00:16:34.848 response: 00:16:34.848 { 00:16:34.848 "code": -110, 00:16:34.848 "message": "Connection timed out" 00:16:34.848 } 00:16:34.848 18:25:32 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:34.848 18:25:32 -- common/autotest_common.sh@653 -- # es=1 00:16:34.848 18:25:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.848 18:25:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.848 18:25:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.848 18:25:32 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:16:34.848 18:25:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:34.848 18:25:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.848 18:25:32 -- common/autotest_common.sh@10 -- # set +x 00:16:34.848 18:25:32 -- host/discovery.sh@67 -- # sort 00:16:34.848 18:25:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:34.848 18:25:32 -- host/discovery.sh@67 -- # xargs 00:16:34.848 18:25:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.848 18:25:32 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:16:34.848 18:25:32 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:16:34.848 18:25:32 -- host/discovery.sh@162 -- # kill 82218 00:16:34.848 18:25:32 -- host/discovery.sh@163 -- # nvmftestfini 00:16:34.848 18:25:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:34.848 18:25:32 -- nvmf/common.sh@116 -- # sync 00:16:34.848 18:25:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:34.848 18:25:32 -- nvmf/common.sh@119 -- # set +e 00:16:34.848 18:25:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:34.848 18:25:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:34.848 rmmod nvme_tcp 00:16:34.848 rmmod nvme_fabrics 00:16:34.848 rmmod nvme_keyring 00:16:34.848 18:25:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:34.848 18:25:33 -- nvmf/common.sh@123 -- # set -e 00:16:34.848 18:25:33 -- nvmf/common.sh@124 -- # return 0 00:16:34.848 18:25:33 -- nvmf/common.sh@477 -- # '[' -n 82186 ']' 00:16:34.848 18:25:33 -- nvmf/common.sh@478 -- # killprocess 82186 00:16:34.848 18:25:33 -- common/autotest_common.sh@936 -- # '[' -z 82186 ']' 00:16:34.848 18:25:33 -- common/autotest_common.sh@940 -- # kill -0 82186 00:16:34.848 18:25:33 -- common/autotest_common.sh@941 -- # uname 00:16:34.848 18:25:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.848 18:25:33 
-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82186 00:16:34.848 18:25:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:34.848 18:25:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:34.848 killing process with pid 82186 00:16:34.848 18:25:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82186' 00:16:34.848 18:25:33 -- common/autotest_common.sh@955 -- # kill 82186 00:16:34.848 18:25:33 -- common/autotest_common.sh@960 -- # wait 82186 00:16:35.107 18:25:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:35.107 18:25:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:35.107 18:25:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:35.107 18:25:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.107 18:25:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:35.107 18:25:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.108 18:25:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.108 18:25:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.108 18:25:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:35.108 00:16:35.108 real 0m13.189s 00:16:35.108 user 0m24.889s 00:16:35.108 sys 0m2.185s 00:16:35.108 ************************************ 00:16:35.108 END TEST nvmf_discovery 00:16:35.108 ************************************ 00:16:35.108 18:25:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:35.108 18:25:33 -- common/autotest_common.sh@10 -- # set +x 00:16:35.108 18:25:33 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:35.108 18:25:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:35.108 18:25:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.108 18:25:33 -- common/autotest_common.sh@10 -- # set +x 00:16:35.108 ************************************ 00:16:35.108 START TEST nvmf_discovery_remove_ifc 00:16:35.108 ************************************ 00:16:35.108 18:25:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:35.108 * Looking for test storage... 
00:16:35.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:35.367 18:25:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:35.367 18:25:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:35.367 18:25:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:35.367 18:25:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:35.367 18:25:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:35.367 18:25:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:35.367 18:25:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:35.367 18:25:33 -- scripts/common.sh@335 -- # IFS=.-: 00:16:35.367 18:25:33 -- scripts/common.sh@335 -- # read -ra ver1 00:16:35.367 18:25:33 -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.367 18:25:33 -- scripts/common.sh@336 -- # read -ra ver2 00:16:35.367 18:25:33 -- scripts/common.sh@337 -- # local 'op=<' 00:16:35.367 18:25:33 -- scripts/common.sh@339 -- # ver1_l=2 00:16:35.367 18:25:33 -- scripts/common.sh@340 -- # ver2_l=1 00:16:35.367 18:25:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:35.367 18:25:33 -- scripts/common.sh@343 -- # case "$op" in 00:16:35.367 18:25:33 -- scripts/common.sh@344 -- # : 1 00:16:35.367 18:25:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:35.367 18:25:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:35.367 18:25:33 -- scripts/common.sh@364 -- # decimal 1 00:16:35.367 18:25:33 -- scripts/common.sh@352 -- # local d=1 00:16:35.367 18:25:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.367 18:25:33 -- scripts/common.sh@354 -- # echo 1 00:16:35.367 18:25:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:35.367 18:25:33 -- scripts/common.sh@365 -- # decimal 2 00:16:35.367 18:25:33 -- scripts/common.sh@352 -- # local d=2 00:16:35.367 18:25:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.367 18:25:33 -- scripts/common.sh@354 -- # echo 2 00:16:35.367 18:25:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:35.367 18:25:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:35.367 18:25:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:35.367 18:25:33 -- scripts/common.sh@367 -- # return 0 00:16:35.367 18:25:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.367 18:25:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:35.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.367 --rc genhtml_branch_coverage=1 00:16:35.367 --rc genhtml_function_coverage=1 00:16:35.367 --rc genhtml_legend=1 00:16:35.367 --rc geninfo_all_blocks=1 00:16:35.367 --rc geninfo_unexecuted_blocks=1 00:16:35.367 00:16:35.367 ' 00:16:35.367 18:25:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:35.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.367 --rc genhtml_branch_coverage=1 00:16:35.367 --rc genhtml_function_coverage=1 00:16:35.367 --rc genhtml_legend=1 00:16:35.367 --rc geninfo_all_blocks=1 00:16:35.367 --rc geninfo_unexecuted_blocks=1 00:16:35.367 00:16:35.367 ' 00:16:35.367 18:25:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:35.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.367 --rc genhtml_branch_coverage=1 00:16:35.367 --rc genhtml_function_coverage=1 00:16:35.367 --rc genhtml_legend=1 00:16:35.367 --rc geninfo_all_blocks=1 00:16:35.367 --rc geninfo_unexecuted_blocks=1 00:16:35.367 00:16:35.367 ' 00:16:35.367 
18:25:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:35.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.367 --rc genhtml_branch_coverage=1 00:16:35.367 --rc genhtml_function_coverage=1 00:16:35.367 --rc genhtml_legend=1 00:16:35.367 --rc geninfo_all_blocks=1 00:16:35.367 --rc geninfo_unexecuted_blocks=1 00:16:35.367 00:16:35.367 ' 00:16:35.367 18:25:33 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.367 18:25:33 -- nvmf/common.sh@7 -- # uname -s 00:16:35.367 18:25:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.367 18:25:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.367 18:25:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.367 18:25:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.367 18:25:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.367 18:25:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.367 18:25:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.367 18:25:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.367 18:25:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.367 18:25:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.367 18:25:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:16:35.367 18:25:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:16:35.367 18:25:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.367 18:25:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.367 18:25:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.367 18:25:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.367 18:25:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.367 18:25:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.367 18:25:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.367 18:25:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.367 18:25:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.368 18:25:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.368 18:25:33 -- paths/export.sh@5 -- # export PATH 00:16:35.368 18:25:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.368 18:25:33 -- nvmf/common.sh@46 -- # : 0 00:16:35.368 18:25:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:35.368 18:25:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:35.368 18:25:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:35.368 18:25:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.368 18:25:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.368 18:25:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:35.368 18:25:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:35.368 18:25:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:35.368 18:25:33 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:35.368 18:25:33 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:35.368 18:25:33 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:35.368 18:25:33 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:35.368 18:25:33 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:35.368 18:25:33 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:35.368 18:25:33 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:35.368 18:25:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:35.368 18:25:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.368 18:25:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:35.368 18:25:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:35.368 18:25:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:35.368 18:25:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.368 18:25:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.368 18:25:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.368 18:25:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:35.368 18:25:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:35.368 18:25:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:35.368 18:25:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:35.368 18:25:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:35.368 18:25:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:35.368 18:25:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.368 18:25:33 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.368 18:25:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:35.368 18:25:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:35.368 18:25:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.368 18:25:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.368 18:25:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.368 18:25:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.368 18:25:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.368 18:25:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.368 18:25:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.368 18:25:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.368 18:25:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:35.368 18:25:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:35.368 Cannot find device "nvmf_tgt_br" 00:16:35.368 18:25:33 -- nvmf/common.sh@154 -- # true 00:16:35.368 18:25:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.368 Cannot find device "nvmf_tgt_br2" 00:16:35.368 18:25:33 -- nvmf/common.sh@155 -- # true 00:16:35.368 18:25:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:35.368 18:25:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:35.368 Cannot find device "nvmf_tgt_br" 00:16:35.368 18:25:33 -- nvmf/common.sh@157 -- # true 00:16:35.368 18:25:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:35.368 Cannot find device "nvmf_tgt_br2" 00:16:35.368 18:25:33 -- nvmf/common.sh@158 -- # true 00:16:35.368 18:25:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:35.627 18:25:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:35.627 18:25:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.627 18:25:33 -- nvmf/common.sh@161 -- # true 00:16:35.627 18:25:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.627 18:25:33 -- nvmf/common.sh@162 -- # true 00:16:35.627 18:25:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.627 18:25:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.627 18:25:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.627 18:25:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.627 18:25:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.627 18:25:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.627 18:25:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.627 18:25:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.627 18:25:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.627 18:25:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:35.627 18:25:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:35.627 18:25:33 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:35.627 18:25:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:35.627 18:25:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.627 18:25:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.627 18:25:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.627 18:25:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:35.627 18:25:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:35.627 18:25:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.627 18:25:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.627 18:25:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.627 18:25:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.627 18:25:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.627 18:25:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:35.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:16:35.627 00:16:35.627 --- 10.0.0.2 ping statistics --- 00:16:35.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.627 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:16:35.627 18:25:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:35.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:35.627 00:16:35.627 --- 10.0.0.3 ping statistics --- 00:16:35.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.627 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:35.627 18:25:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:35.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:16:35.627 00:16:35.627 --- 10.0.0.1 ping statistics --- 00:16:35.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.627 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:16:35.627 18:25:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.627 18:25:33 -- nvmf/common.sh@421 -- # return 0 00:16:35.627 18:25:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:35.627 18:25:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.627 18:25:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:35.627 18:25:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:35.627 18:25:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.627 18:25:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:35.627 18:25:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:35.627 18:25:33 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:35.627 18:25:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:35.627 18:25:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.627 18:25:33 -- common/autotest_common.sh@10 -- # set +x 00:16:35.627 18:25:33 -- nvmf/common.sh@469 -- # nvmfpid=82707 00:16:35.627 18:25:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:35.627 18:25:33 -- nvmf/common.sh@470 -- # waitforlisten 82707 00:16:35.627 18:25:33 -- common/autotest_common.sh@829 -- # '[' -z 82707 ']' 00:16:35.627 18:25:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.627 18:25:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.627 18:25:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.627 18:25:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.627 18:25:33 -- common/autotest_common.sh@10 -- # set +x 00:16:35.886 [2024-11-17 18:25:33.932750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:35.886 [2024-11-17 18:25:33.932854] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.886 [2024-11-17 18:25:34.074049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.886 [2024-11-17 18:25:34.112706] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:35.886 [2024-11-17 18:25:34.112874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.886 [2024-11-17 18:25:34.112890] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.886 [2024-11-17 18:25:34.112900] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:35.886 [2024-11-17 18:25:34.112936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.823 18:25:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.823 18:25:34 -- common/autotest_common.sh@862 -- # return 0 00:16:36.823 18:25:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:36.823 18:25:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.823 18:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:36.823 18:25:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.823 18:25:34 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:36.823 18:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.823 18:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:36.823 [2024-11-17 18:25:34.958360] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.823 [2024-11-17 18:25:34.966555] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:36.823 null0 00:16:36.823 [2024-11-17 18:25:34.998423] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.823 18:25:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.823 18:25:35 -- host/discovery_remove_ifc.sh@59 -- # hostpid=82739 00:16:36.823 18:25:35 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:36.823 18:25:35 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 82739 /tmp/host.sock 00:16:36.823 18:25:35 -- common/autotest_common.sh@829 -- # '[' -z 82739 ']' 00:16:36.823 18:25:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:36.823 18:25:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.823 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:36.823 18:25:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:36.823 18:25:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.823 18:25:35 -- common/autotest_common.sh@10 -- # set +x 00:16:36.823 [2024-11-17 18:25:35.072160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:36.823 [2024-11-17 18:25:35.072302] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82739 ] 00:16:37.082 [2024-11-17 18:25:35.213831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.082 [2024-11-17 18:25:35.255187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:37.082 [2024-11-17 18:25:35.255418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.082 18:25:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.082 18:25:35 -- common/autotest_common.sh@862 -- # return 0 00:16:37.082 18:25:35 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:37.082 18:25:35 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:37.082 18:25:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.082 18:25:35 -- common/autotest_common.sh@10 -- # set +x 00:16:37.082 18:25:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.082 18:25:35 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:37.082 18:25:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.082 18:25:35 -- common/autotest_common.sh@10 -- # set +x 00:16:37.341 18:25:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.341 18:25:35 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:37.341 18:25:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.341 18:25:35 -- common/autotest_common.sh@10 -- # set +x 00:16:38.277 [2024-11-17 18:25:36.401290] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:38.277 [2024-11-17 18:25:36.401357] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:38.277 [2024-11-17 18:25:36.401376] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:38.277 [2024-11-17 18:25:36.407339] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:38.277 [2024-11-17 18:25:36.463045] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:38.277 [2024-11-17 18:25:36.463109] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:38.277 [2024-11-17 18:25:36.463135] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:38.277 [2024-11-17 18:25:36.463149] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:38.277 [2024-11-17 18:25:36.463173] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:38.277 18:25:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.277 18:25:36 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.277 18:25:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.277 18:25:36 -- common/autotest_common.sh@10 -- # set +x 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:38.277 [2024-11-17 18:25:36.469939] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b0caf0 was disconnected and freed. delete nvme_qpair. 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.277 18:25:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.277 18:25:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:38.277 18:25:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.277 18:25:36 -- common/autotest_common.sh@10 -- # set +x 00:16:38.536 18:25:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.536 18:25:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:38.536 18:25:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:39.472 18:25:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:39.472 18:25:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.472 18:25:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.472 18:25:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:39.472 18:25:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:39.472 18:25:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:39.472 18:25:37 -- common/autotest_common.sh@10 -- # set +x 00:16:39.472 18:25:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.472 18:25:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:39.472 18:25:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:40.410 18:25:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:40.410 18:25:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:40.410 18:25:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.410 18:25:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:40.410 18:25:38 -- common/autotest_common.sh@10 -- # set +x 00:16:40.410 18:25:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:40.410 18:25:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:40.669 18:25:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.669 18:25:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:40.669 18:25:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:41.603 18:25:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:41.603 18:25:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:16:41.603 18:25:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.603 18:25:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:41.603 18:25:39 -- common/autotest_common.sh@10 -- # set +x 00:16:41.603 18:25:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:41.604 18:25:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:41.604 18:25:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.604 18:25:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:41.604 18:25:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:42.538 18:25:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:42.538 18:25:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.538 18:25:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.538 18:25:40 -- common/autotest_common.sh@10 -- # set +x 00:16:42.538 18:25:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:42.538 18:25:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:42.538 18:25:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:42.796 18:25:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.796 18:25:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:42.796 18:25:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:43.730 18:25:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:43.730 18:25:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.730 18:25:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:43.730 18:25:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.730 18:25:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:43.730 18:25:41 -- common/autotest_common.sh@10 -- # set +x 00:16:43.730 18:25:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:43.730 18:25:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.730 [2024-11-17 18:25:41.891264] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:43.730 [2024-11-17 18:25:41.891381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.730 [2024-11-17 18:25:41.891398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.730 [2024-11-17 18:25:41.891410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.730 [2024-11-17 18:25:41.891418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.730 [2024-11-17 18:25:41.891428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.730 [2024-11-17 18:25:41.891436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.730 [2024-11-17 18:25:41.891446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.730 [2024-11-17 18:25:41.891455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.730 [2024-11-17 
18:25:41.891465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.730 [2024-11-17 18:25:41.891473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.730 [2024-11-17 18:25:41.891482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad1890 is same with the state(5) to be set 00:16:43.730 18:25:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:43.730 18:25:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:43.730 [2024-11-17 18:25:41.901260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad1890 (9): Bad file descriptor 00:16:43.730 [2024-11-17 18:25:41.911279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:44.715 18:25:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:44.715 18:25:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.715 18:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.715 18:25:42 -- common/autotest_common.sh@10 -- # set +x 00:16:44.715 18:25:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:44.715 18:25:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:44.715 18:25:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:44.715 [2024-11-17 18:25:42.957401] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:46.106 [2024-11-17 18:25:43.981419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:47.042 [2024-11-17 18:25:45.005396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:47.042 [2024-11-17 18:25:45.005542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad1890 with addr=10.0.0.2, port=4420 00:16:47.042 [2024-11-17 18:25:45.005574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad1890 is same with the state(5) to be set 00:16:47.042 [2024-11-17 18:25:45.005648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:47.042 [2024-11-17 18:25:45.005671] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:47.042 [2024-11-17 18:25:45.005689] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:47.042 [2024-11-17 18:25:45.005710] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:47.042 [2024-11-17 18:25:45.006614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad1890 (9): Bad file descriptor 00:16:47.042 [2024-11-17 18:25:45.006694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:47.042 [2024-11-17 18:25:45.006748] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:47.042 [2024-11-17 18:25:45.006825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.042 [2024-11-17 18:25:45.006863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.042 [2024-11-17 18:25:45.006902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.042 [2024-11-17 18:25:45.006923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.042 [2024-11-17 18:25:45.006960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.042 [2024-11-17 18:25:45.006979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.042 [2024-11-17 18:25:45.007000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.042 [2024-11-17 18:25:45.007020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.042 [2024-11-17 18:25:45.007042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.042 [2024-11-17 18:25:45.007062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.042 [2024-11-17 18:25:45.007081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:47.042 [2024-11-17 18:25:45.007111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad0ef0 (9): Bad file descriptor 00:16:47.042 [2024-11-17 18:25:45.007724] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:47.042 [2024-11-17 18:25:45.007772] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:47.042 18:25:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.042 18:25:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:47.043 18:25:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:47.977 18:25:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.977 18:25:46 -- common/autotest_common.sh@10 -- # set +x 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:47.977 18:25:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:47.977 18:25:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.977 18:25:46 -- common/autotest_common.sh@10 -- # set +x 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:47.977 18:25:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:47.977 18:25:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:48.931 [2024-11-17 18:25:47.017087] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:48.931 [2024-11-17 18:25:47.017117] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:48.931 [2024-11-17 18:25:47.017152] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:48.931 [2024-11-17 18:25:47.023135] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:48.931 [2024-11-17 18:25:47.078075] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:48.931 [2024-11-17 18:25:47.078140] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:48.931 [2024-11-17 18:25:47.078163] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:48.931 [2024-11-17 18:25:47.078177] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:16:48.931 [2024-11-17 18:25:47.078185] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:48.931 [2024-11-17 18:25:47.085630] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ac0e30 was disconnected and freed. delete nvme_qpair. 00:16:48.931 18:25:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:48.931 18:25:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:48.931 18:25:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:48.931 18:25:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:48.931 18:25:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.931 18:25:47 -- common/autotest_common.sh@10 -- # set +x 00:16:48.931 18:25:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:48.931 18:25:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.190 18:25:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:49.190 18:25:47 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:49.190 18:25:47 -- host/discovery_remove_ifc.sh@90 -- # killprocess 82739 00:16:49.190 18:25:47 -- common/autotest_common.sh@936 -- # '[' -z 82739 ']' 00:16:49.190 18:25:47 -- common/autotest_common.sh@940 -- # kill -0 82739 00:16:49.190 18:25:47 -- common/autotest_common.sh@941 -- # uname 00:16:49.190 18:25:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.190 18:25:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82739 00:16:49.190 killing process with pid 82739 00:16:49.190 18:25:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:49.190 18:25:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:49.190 18:25:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82739' 00:16:49.190 18:25:47 -- common/autotest_common.sh@955 -- # kill 82739 00:16:49.190 18:25:47 -- common/autotest_common.sh@960 -- # wait 82739 00:16:49.190 18:25:47 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:49.190 18:25:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:49.190 18:25:47 -- nvmf/common.sh@116 -- # sync 00:16:49.190 18:25:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:49.190 18:25:47 -- nvmf/common.sh@119 -- # set +e 00:16:49.190 18:25:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:49.190 18:25:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:49.449 rmmod nvme_tcp 00:16:49.449 rmmod nvme_fabrics 00:16:49.449 rmmod nvme_keyring 00:16:49.449 18:25:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:49.449 18:25:47 -- nvmf/common.sh@123 -- # set -e 00:16:49.449 18:25:47 -- nvmf/common.sh@124 -- # return 0 00:16:49.449 18:25:47 -- nvmf/common.sh@477 -- # '[' -n 82707 ']' 00:16:49.449 18:25:47 -- nvmf/common.sh@478 -- # killprocess 82707 00:16:49.449 18:25:47 -- common/autotest_common.sh@936 -- # '[' -z 82707 ']' 00:16:49.449 18:25:47 -- common/autotest_common.sh@940 -- # kill -0 82707 00:16:49.449 18:25:47 -- common/autotest_common.sh@941 -- # uname 00:16:49.449 18:25:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.449 18:25:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82707 00:16:49.449 killing process with pid 82707 00:16:49.449 18:25:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:49.449 18:25:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:16:49.449 18:25:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82707' 00:16:49.449 18:25:47 -- common/autotest_common.sh@955 -- # kill 82707 00:16:49.449 18:25:47 -- common/autotest_common.sh@960 -- # wait 82707 00:16:49.449 18:25:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:49.449 18:25:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:49.449 18:25:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:49.449 18:25:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.449 18:25:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:49.449 18:25:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.449 18:25:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.449 18:25:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.449 18:25:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:49.449 00:16:49.449 real 0m14.410s 00:16:49.449 user 0m22.702s 00:16:49.449 sys 0m2.393s 00:16:49.449 18:25:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:49.708 18:25:47 -- common/autotest_common.sh@10 -- # set +x 00:16:49.708 ************************************ 00:16:49.708 END TEST nvmf_discovery_remove_ifc 00:16:49.708 ************************************ 00:16:49.708 18:25:47 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:16:49.708 18:25:47 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:49.708 18:25:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:49.708 18:25:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.708 18:25:47 -- common/autotest_common.sh@10 -- # set +x 00:16:49.708 ************************************ 00:16:49.708 START TEST nvmf_digest 00:16:49.708 ************************************ 00:16:49.708 18:25:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:49.708 * Looking for test storage... 00:16:49.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:49.708 18:25:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:49.708 18:25:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:49.708 18:25:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:49.708 18:25:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:49.708 18:25:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:49.708 18:25:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:49.708 18:25:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:49.708 18:25:47 -- scripts/common.sh@335 -- # IFS=.-: 00:16:49.708 18:25:47 -- scripts/common.sh@335 -- # read -ra ver1 00:16:49.708 18:25:47 -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.708 18:25:47 -- scripts/common.sh@336 -- # read -ra ver2 00:16:49.708 18:25:47 -- scripts/common.sh@337 -- # local 'op=<' 00:16:49.708 18:25:47 -- scripts/common.sh@339 -- # ver1_l=2 00:16:49.708 18:25:47 -- scripts/common.sh@340 -- # ver2_l=1 00:16:49.708 18:25:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:49.708 18:25:47 -- scripts/common.sh@343 -- # case "$op" in 00:16:49.708 18:25:47 -- scripts/common.sh@344 -- # : 1 00:16:49.708 18:25:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:49.708 18:25:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:49.708 18:25:47 -- scripts/common.sh@364 -- # decimal 1 00:16:49.708 18:25:47 -- scripts/common.sh@352 -- # local d=1 00:16:49.708 18:25:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.708 18:25:47 -- scripts/common.sh@354 -- # echo 1 00:16:49.708 18:25:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:49.708 18:25:47 -- scripts/common.sh@365 -- # decimal 2 00:16:49.708 18:25:47 -- scripts/common.sh@352 -- # local d=2 00:16:49.708 18:25:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.708 18:25:47 -- scripts/common.sh@354 -- # echo 2 00:16:49.708 18:25:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:49.708 18:25:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:49.708 18:25:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:49.708 18:25:47 -- scripts/common.sh@367 -- # return 0 00:16:49.708 18:25:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.708 18:25:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.708 --rc genhtml_branch_coverage=1 00:16:49.708 --rc genhtml_function_coverage=1 00:16:49.708 --rc genhtml_legend=1 00:16:49.708 --rc geninfo_all_blocks=1 00:16:49.708 --rc geninfo_unexecuted_blocks=1 00:16:49.708 00:16:49.708 ' 00:16:49.708 18:25:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.708 --rc genhtml_branch_coverage=1 00:16:49.708 --rc genhtml_function_coverage=1 00:16:49.708 --rc genhtml_legend=1 00:16:49.708 --rc geninfo_all_blocks=1 00:16:49.708 --rc geninfo_unexecuted_blocks=1 00:16:49.708 00:16:49.708 ' 00:16:49.708 18:25:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.708 --rc genhtml_branch_coverage=1 00:16:49.708 --rc genhtml_function_coverage=1 00:16:49.708 --rc genhtml_legend=1 00:16:49.708 --rc geninfo_all_blocks=1 00:16:49.708 --rc geninfo_unexecuted_blocks=1 00:16:49.708 00:16:49.708 ' 00:16:49.708 18:25:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.708 --rc genhtml_branch_coverage=1 00:16:49.708 --rc genhtml_function_coverage=1 00:16:49.708 --rc genhtml_legend=1 00:16:49.708 --rc geninfo_all_blocks=1 00:16:49.708 --rc geninfo_unexecuted_blocks=1 00:16:49.708 00:16:49.708 ' 00:16:49.708 18:25:47 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:49.708 18:25:47 -- nvmf/common.sh@7 -- # uname -s 00:16:49.708 18:25:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.708 18:25:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.708 18:25:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.708 18:25:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.708 18:25:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.708 18:25:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.708 18:25:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.708 18:25:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.708 18:25:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.708 18:25:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.708 18:25:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:16:49.708 
18:25:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:16:49.709 18:25:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.709 18:25:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.709 18:25:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:49.709 18:25:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:49.709 18:25:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.709 18:25:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.709 18:25:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.709 18:25:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.709 18:25:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.709 18:25:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.709 18:25:47 -- paths/export.sh@5 -- # export PATH 00:16:49.709 18:25:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.709 18:25:47 -- nvmf/common.sh@46 -- # : 0 00:16:49.709 18:25:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:49.709 18:25:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:49.709 18:25:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:49.709 18:25:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.709 18:25:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.709 18:25:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
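Condensed from the trace above, the parts of nvmf/common.sh that matter for this digest run are the listener ports, the generated host NQN/ID pair, and the extra flags appended to NVMF_APP. A minimal sketch using the values seen in this log (the hostid derivation shown here is an assumption; common.sh may compute it differently):

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-... in this run
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # uuid portion of the NQN (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id plus full tracepoint mask
    NET_TYPE=virt                                 # veth/netns topology, no physical NICs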
00:16:49.709 18:25:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:49.709 18:25:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:49.709 18:25:47 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:49.709 18:25:47 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:49.709 18:25:47 -- host/digest.sh@16 -- # runtime=2 00:16:49.709 18:25:47 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:16:49.709 18:25:47 -- host/digest.sh@132 -- # nvmftestinit 00:16:49.709 18:25:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:49.709 18:25:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.709 18:25:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:49.709 18:25:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:49.709 18:25:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:49.709 18:25:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.709 18:25:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.709 18:25:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.709 18:25:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:49.709 18:25:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:49.709 18:25:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:49.709 18:25:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:49.709 18:25:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:49.709 18:25:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:49.709 18:25:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.709 18:25:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.709 18:25:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:49.709 18:25:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:49.709 18:25:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:49.709 18:25:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:49.709 18:25:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:49.709 18:25:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.709 18:25:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:49.709 18:25:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:49.709 18:25:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:49.709 18:25:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:49.709 18:25:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:49.968 18:25:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:49.968 Cannot find device "nvmf_tgt_br" 00:16:49.968 18:25:48 -- nvmf/common.sh@154 -- # true 00:16:49.968 18:25:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:49.968 Cannot find device "nvmf_tgt_br2" 00:16:49.968 18:25:48 -- nvmf/common.sh@155 -- # true 00:16:49.968 18:25:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:49.968 18:25:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:49.968 Cannot find device "nvmf_tgt_br" 00:16:49.968 18:25:48 -- nvmf/common.sh@157 -- # true 00:16:49.968 18:25:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:49.968 Cannot find device "nvmf_tgt_br2" 00:16:49.968 18:25:48 -- nvmf/common.sh@158 -- # true 00:16:49.968 18:25:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:49.968 18:25:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:49.968 
18:25:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:49.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.968 18:25:48 -- nvmf/common.sh@161 -- # true 00:16:49.968 18:25:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:49.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.968 18:25:48 -- nvmf/common.sh@162 -- # true 00:16:49.968 18:25:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:49.968 18:25:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:49.968 18:25:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:49.968 18:25:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:49.968 18:25:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:49.968 18:25:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:49.968 18:25:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:49.968 18:25:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:49.968 18:25:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:49.968 18:25:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:49.968 18:25:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:49.968 18:25:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:49.968 18:25:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:49.968 18:25:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:49.968 18:25:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:49.968 18:25:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:49.968 18:25:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:50.225 18:25:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:50.225 18:25:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:50.225 18:25:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:50.225 18:25:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:50.225 18:25:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:50.225 18:25:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:50.225 18:25:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:50.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:50.225 00:16:50.225 --- 10.0.0.2 ping statistics --- 00:16:50.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.225 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:50.225 18:25:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:50.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:50.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:50.225 00:16:50.225 --- 10.0.0.3 ping statistics --- 00:16:50.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.225 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:50.225 18:25:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:50.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:50.225 00:16:50.225 --- 10.0.0.1 ping statistics --- 00:16:50.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.225 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:50.225 18:25:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.225 18:25:48 -- nvmf/common.sh@421 -- # return 0 00:16:50.225 18:25:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:50.225 18:25:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.225 18:25:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:50.225 18:25:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:50.225 18:25:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.225 18:25:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:50.225 18:25:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:50.225 18:25:48 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:50.225 18:25:48 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:16:50.225 18:25:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:50.225 18:25:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.225 18:25:48 -- common/autotest_common.sh@10 -- # set +x 00:16:50.225 ************************************ 00:16:50.225 START TEST nvmf_digest_clean 00:16:50.225 ************************************ 00:16:50.225 18:25:48 -- common/autotest_common.sh@1114 -- # run_digest 00:16:50.225 18:25:48 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:16:50.225 18:25:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:50.225 18:25:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.225 18:25:48 -- common/autotest_common.sh@10 -- # set +x 00:16:50.225 18:25:48 -- nvmf/common.sh@469 -- # nvmfpid=83162 00:16:50.225 18:25:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:50.225 18:25:48 -- nvmf/common.sh@470 -- # waitforlisten 83162 00:16:50.225 18:25:48 -- common/autotest_common.sh@829 -- # '[' -z 83162 ']' 00:16:50.225 18:25:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.225 18:25:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.225 18:25:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.225 18:25:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.225 18:25:48 -- common/autotest_common.sh@10 -- # set +x 00:16:50.225 [2024-11-17 18:25:48.393929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:50.225 [2024-11-17 18:25:48.394042] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.483 [2024-11-17 18:25:48.535113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.483 [2024-11-17 18:25:48.576185] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:50.483 [2024-11-17 18:25:48.576375] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.483 [2024-11-17 18:25:48.576392] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.483 [2024-11-17 18:25:48.576403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.483 [2024-11-17 18:25:48.576443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.416 18:25:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.416 18:25:49 -- common/autotest_common.sh@862 -- # return 0 00:16:51.416 18:25:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:51.416 18:25:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.416 18:25:49 -- common/autotest_common.sh@10 -- # set +x 00:16:51.416 18:25:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.416 18:25:49 -- host/digest.sh@120 -- # common_target_config 00:16:51.416 18:25:49 -- host/digest.sh@43 -- # rpc_cmd 00:16:51.416 18:25:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.416 18:25:49 -- common/autotest_common.sh@10 -- # set +x 00:16:51.416 null0 00:16:51.416 [2024-11-17 18:25:49.446899] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.416 [2024-11-17 18:25:49.471078] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.416 18:25:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.416 18:25:49 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:16:51.416 18:25:49 -- host/digest.sh@77 -- # local rw bs qd 00:16:51.416 18:25:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:51.416 18:25:49 -- host/digest.sh@80 -- # rw=randread 00:16:51.416 18:25:49 -- host/digest.sh@80 -- # bs=4096 00:16:51.416 18:25:49 -- host/digest.sh@80 -- # qd=128 00:16:51.416 18:25:49 -- host/digest.sh@82 -- # bperfpid=83194 00:16:51.416 18:25:49 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:51.416 18:25:49 -- host/digest.sh@83 -- # waitforlisten 83194 /var/tmp/bperf.sock 00:16:51.416 18:25:49 -- common/autotest_common.sh@829 -- # '[' -z 83194 ']' 00:16:51.416 18:25:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:51.416 18:25:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:51.416 18:25:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
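Each run_bperf invocation above boils down to launching SPDK's bdevperf example as the initiator-side load generator and waiting for its private RPC socket. A sketch with the flags from this trace ($rootdir stands for /home/vagrant/spdk_repo/spdk here; the backgrounding is a reconstruction, the trace simply records the resulting pid, 83194 for this first run):

    bperfsock=/var/tmp/bperf.sock
    "$rootdir"/build/examples/bdevperf \
        -m 2 -r "$bperfsock" \
        -w randread -o 4096 -q 128 -t 2 \
        -z --wait-for-rpc &                 # -z: stay idle until a bdev is attached over RPC
    bperfpid=$!
    waitforlisten "$bperfpid" "$bperfsock"  # autotest helper, blocks until the RPC socket is up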
00:16:51.416 18:25:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.416 18:25:49 -- common/autotest_common.sh@10 -- # set +x 00:16:51.416 [2024-11-17 18:25:49.519011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:51.416 [2024-11-17 18:25:49.519099] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83194 ] 00:16:51.416 [2024-11-17 18:25:49.654684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.675 [2024-11-17 18:25:49.694378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.675 18:25:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.675 18:25:49 -- common/autotest_common.sh@862 -- # return 0 00:16:51.675 18:25:49 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:51.675 18:25:49 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:51.675 18:25:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:51.933 18:25:50 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:51.933 18:25:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:52.191 nvme0n1 00:16:52.191 18:25:50 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:52.191 18:25:50 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:52.191 Running I/O for 2 seconds... 
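The bperf_rpc calls above wire the idle bdevperf process to the target before the timed run starts: finish framework init, attach an NVMe-oF controller over TCP with data digest (--ddgst) enabled, then kick off the workload through bdevperf.py. Roughly (the rpc variable is just shorthand for the calls shown in the trace):

    rpc="$rootdir/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc framework_start_init
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # exposes bdev nvme0n1 with TCP data digest on
    "$rootdir"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests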
00:16:54.722 00:16:54.722 Latency(us) 00:16:54.722 [2024-11-17T18:25:52.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.722 [2024-11-17T18:25:52.989Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:54.722 nvme0n1 : 2.00 16470.71 64.34 0.00 0.00 7766.12 7000.44 18230.92 00:16:54.722 [2024-11-17T18:25:52.989Z] =================================================================================================================== 00:16:54.722 [2024-11-17T18:25:52.989Z] Total : 16470.71 64.34 0.00 0.00 7766.12 7000.44 18230.92 00:16:54.722 0 00:16:54.722 18:25:52 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:54.722 18:25:52 -- host/digest.sh@92 -- # get_accel_stats 00:16:54.722 18:25:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:54.722 | select(.opcode=="crc32c") 00:16:54.722 | "\(.module_name) \(.executed)"' 00:16:54.722 18:25:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:54.722 18:25:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:54.722 18:25:52 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:54.722 18:25:52 -- host/digest.sh@93 -- # exp_module=software 00:16:54.722 18:25:52 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:54.722 18:25:52 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:54.722 18:25:52 -- host/digest.sh@97 -- # killprocess 83194 00:16:54.722 18:25:52 -- common/autotest_common.sh@936 -- # '[' -z 83194 ']' 00:16:54.722 18:25:52 -- common/autotest_common.sh@940 -- # kill -0 83194 00:16:54.722 18:25:52 -- common/autotest_common.sh@941 -- # uname 00:16:54.722 18:25:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:54.722 18:25:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83194 00:16:54.722 18:25:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:54.722 18:25:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:54.722 killing process with pid 83194 00:16:54.722 18:25:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83194' 00:16:54.722 Received shutdown signal, test time was about 2.000000 seconds 00:16:54.722 00:16:54.722 Latency(us) 00:16:54.722 [2024-11-17T18:25:52.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.722 [2024-11-17T18:25:52.989Z] =================================================================================================================== 00:16:54.722 [2024-11-17T18:25:52.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:54.722 18:25:52 -- common/autotest_common.sh@955 -- # kill 83194 00:16:54.722 18:25:52 -- common/autotest_common.sh@960 -- # wait 83194 00:16:54.722 18:25:52 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:16:54.722 18:25:52 -- host/digest.sh@77 -- # local rw bs qd 00:16:54.722 18:25:52 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:54.722 18:25:52 -- host/digest.sh@80 -- # rw=randread 00:16:54.722 18:25:52 -- host/digest.sh@80 -- # bs=131072 00:16:54.722 18:25:52 -- host/digest.sh@80 -- # qd=16 00:16:54.722 18:25:52 -- host/digest.sh@82 -- # bperfpid=83241 00:16:54.722 18:25:52 -- host/digest.sh@83 -- # waitforlisten 83241 /var/tmp/bperf.sock 00:16:54.722 18:25:52 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:54.722 18:25:52 -- 
common/autotest_common.sh@829 -- # '[' -z 83241 ']' 00:16:54.722 18:25:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:54.722 18:25:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:54.723 18:25:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:54.723 18:25:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.723 18:25:52 -- common/autotest_common.sh@10 -- # set +x 00:16:54.723 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:54.723 Zero copy mechanism will not be used. 00:16:54.723 [2024-11-17 18:25:52.970414] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:54.723 [2024-11-17 18:25:52.970541] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83241 ] 00:16:54.981 [2024-11-17 18:25:53.102298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.981 [2024-11-17 18:25:53.136183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.981 18:25:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.981 18:25:53 -- common/autotest_common.sh@862 -- # return 0 00:16:54.981 18:25:53 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:54.981 18:25:53 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:54.981 18:25:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:55.238 18:25:53 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:55.238 18:25:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:55.805 nvme0n1 00:16:55.805 18:25:53 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:55.805 18:25:53 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:55.805 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:55.805 Zero copy mechanism will not be used. 00:16:55.805 Running I/O for 2 seconds... 
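After every timed run, the clean-digest test reads the accel framework statistics back from bdevperf and checks that crc32c digests were in fact computed, and by the module it expects (plain software here, since no hardware accel was requested). Condensed from the get_accel_stats helper seen above:

    read -r acc_module acc_executed < <(
        "$rootdir"/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))                 # at least one digest was computed during the run
    [[ $acc_module == software ]]          # and the software crc32c module did the work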
00:16:57.708 00:16:57.708 Latency(us) 00:16:57.708 [2024-11-17T18:25:55.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.708 [2024-11-17T18:25:55.975Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:57.708 nvme0n1 : 2.00 8144.88 1018.11 0.00 0.00 1961.64 1697.98 9175.04 00:16:57.708 [2024-11-17T18:25:55.975Z] =================================================================================================================== 00:16:57.708 [2024-11-17T18:25:55.975Z] Total : 8144.88 1018.11 0.00 0.00 1961.64 1697.98 9175.04 00:16:57.708 0 00:16:57.708 18:25:55 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:57.708 18:25:55 -- host/digest.sh@92 -- # get_accel_stats 00:16:57.708 18:25:55 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:57.708 | select(.opcode=="crc32c") 00:16:57.708 | "\(.module_name) \(.executed)"' 00:16:57.708 18:25:55 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:57.708 18:25:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:57.967 18:25:56 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:57.967 18:25:56 -- host/digest.sh@93 -- # exp_module=software 00:16:57.967 18:25:56 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:57.967 18:25:56 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:57.967 18:25:56 -- host/digest.sh@97 -- # killprocess 83241 00:16:57.967 18:25:56 -- common/autotest_common.sh@936 -- # '[' -z 83241 ']' 00:16:57.967 18:25:56 -- common/autotest_common.sh@940 -- # kill -0 83241 00:16:57.967 18:25:56 -- common/autotest_common.sh@941 -- # uname 00:16:57.967 18:25:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.967 18:25:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83241 00:16:58.227 18:25:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:58.227 killing process with pid 83241 00:16:58.227 18:25:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:58.227 18:25:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83241' 00:16:58.227 Received shutdown signal, test time was about 2.000000 seconds 00:16:58.227 00:16:58.227 Latency(us) 00:16:58.227 [2024-11-17T18:25:56.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.227 [2024-11-17T18:25:56.494Z] =================================================================================================================== 00:16:58.227 [2024-11-17T18:25:56.494Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:58.227 18:25:56 -- common/autotest_common.sh@955 -- # kill 83241 00:16:58.227 18:25:56 -- common/autotest_common.sh@960 -- # wait 83241 00:16:58.227 18:25:56 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:58.227 18:25:56 -- host/digest.sh@77 -- # local rw bs qd 00:16:58.227 18:25:56 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:58.227 18:25:56 -- host/digest.sh@80 -- # rw=randwrite 00:16:58.227 18:25:56 -- host/digest.sh@80 -- # bs=4096 00:16:58.227 18:25:56 -- host/digest.sh@80 -- # qd=128 00:16:58.227 18:25:56 -- host/digest.sh@82 -- # bperfpid=83295 00:16:58.227 18:25:56 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:58.227 18:25:56 -- host/digest.sh@83 -- # waitforlisten 83295 /var/tmp/bperf.sock 00:16:58.227 18:25:56 -- 
common/autotest_common.sh@829 -- # '[' -z 83295 ']' 00:16:58.227 18:25:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:58.227 18:25:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:58.227 18:25:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:58.227 18:25:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.227 18:25:56 -- common/autotest_common.sh@10 -- # set +x 00:16:58.227 [2024-11-17 18:25:56.423836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:58.227 [2024-11-17 18:25:56.423941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83295 ] 00:16:58.487 [2024-11-17 18:25:56.561614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.487 [2024-11-17 18:25:56.594024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.487 18:25:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.487 18:25:56 -- common/autotest_common.sh@862 -- # return 0 00:16:58.487 18:25:56 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:58.487 18:25:56 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:58.487 18:25:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:58.775 18:25:56 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.775 18:25:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:59.041 nvme0n1 00:16:59.041 18:25:57 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:59.041 18:25:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:59.314 Running I/O for 2 seconds... 
00:17:01.221 00:17:01.221 Latency(us) 00:17:01.221 [2024-11-17T18:25:59.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.221 [2024-11-17T18:25:59.488Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.221 nvme0n1 : 2.01 17407.53 68.00 0.00 0.00 7347.15 6374.87 15192.44 00:17:01.221 [2024-11-17T18:25:59.488Z] =================================================================================================================== 00:17:01.221 [2024-11-17T18:25:59.488Z] Total : 17407.53 68.00 0.00 0.00 7347.15 6374.87 15192.44 00:17:01.221 0 00:17:01.221 18:25:59 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:01.221 18:25:59 -- host/digest.sh@92 -- # get_accel_stats 00:17:01.221 18:25:59 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:01.221 18:25:59 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:01.221 | select(.opcode=="crc32c") 00:17:01.221 | "\(.module_name) \(.executed)"' 00:17:01.221 18:25:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:01.479 18:25:59 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:01.479 18:25:59 -- host/digest.sh@93 -- # exp_module=software 00:17:01.479 18:25:59 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:01.479 18:25:59 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:01.479 18:25:59 -- host/digest.sh@97 -- # killprocess 83295 00:17:01.479 18:25:59 -- common/autotest_common.sh@936 -- # '[' -z 83295 ']' 00:17:01.479 18:25:59 -- common/autotest_common.sh@940 -- # kill -0 83295 00:17:01.479 18:25:59 -- common/autotest_common.sh@941 -- # uname 00:17:01.479 18:25:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:01.479 18:25:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83295 00:17:01.737 18:25:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:01.737 18:25:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:01.737 killing process with pid 83295 00:17:01.737 18:25:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83295' 00:17:01.737 Received shutdown signal, test time was about 2.000000 seconds 00:17:01.737 00:17:01.737 Latency(us) 00:17:01.737 [2024-11-17T18:26:00.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.737 [2024-11-17T18:26:00.004Z] =================================================================================================================== 00:17:01.737 [2024-11-17T18:26:00.004Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.737 18:25:59 -- common/autotest_common.sh@955 -- # kill 83295 00:17:01.737 18:25:59 -- common/autotest_common.sh@960 -- # wait 83295 00:17:01.737 18:25:59 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:17:01.737 18:25:59 -- host/digest.sh@77 -- # local rw bs qd 00:17:01.737 18:25:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:01.737 18:25:59 -- host/digest.sh@80 -- # rw=randwrite 00:17:01.737 18:25:59 -- host/digest.sh@80 -- # bs=131072 00:17:01.737 18:25:59 -- host/digest.sh@80 -- # qd=16 00:17:01.737 18:25:59 -- host/digest.sh@82 -- # bperfpid=83343 00:17:01.737 18:25:59 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:01.737 18:25:59 -- host/digest.sh@83 -- # waitforlisten 83343 /var/tmp/bperf.sock 00:17:01.737 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:01.737 18:25:59 -- common/autotest_common.sh@829 -- # '[' -z 83343 ']' 00:17:01.737 18:25:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:01.737 18:25:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.737 18:25:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:01.737 18:25:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.737 18:25:59 -- common/autotest_common.sh@10 -- # set +x 00:17:01.737 [2024-11-17 18:25:59.940874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:01.737 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:01.737 Zero copy mechanism will not be used. 00:17:01.737 [2024-11-17 18:25:59.941614] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83343 ] 00:17:01.996 [2024-11-17 18:26:00.079861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.996 [2024-11-17 18:26:00.112201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.996 18:26:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.996 18:26:00 -- common/autotest_common.sh@862 -- # return 0 00:17:01.996 18:26:00 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:01.996 18:26:00 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:01.996 18:26:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:02.254 18:26:00 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:02.254 18:26:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:02.512 nvme0n1 00:17:02.512 18:26:00 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:02.512 18:26:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:02.770 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:02.770 Zero copy mechanism will not be used. 00:17:02.770 Running I/O for 2 seconds... 
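For reference, nvmf_digest_clean drives the same helper through four workload shapes (digest.sh lines 122-125 in this trace); the large-block runs are the ones that trip the "greater than zero copy threshold (65536)" notice seen above:

    run_bperf randread  4096   128   # 4 KiB reads,    queue depth 128
    run_bperf randread  131072 16    # 128 KiB reads,  queue depth 16 (no zero copy)
    run_bperf randwrite 4096   128   # 4 KiB writes,   queue depth 128
    run_bperf randwrite 131072 16    # 128 KiB writes, queue depth 16 (no zero copy)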
00:17:04.674 00:17:04.674 Latency(us) 00:17:04.674 [2024-11-17T18:26:02.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.674 [2024-11-17T18:26:02.941Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:04.674 nvme0n1 : 2.00 6770.38 846.30 0.00 0.00 2358.48 1854.37 6434.44 00:17:04.674 [2024-11-17T18:26:02.941Z] =================================================================================================================== 00:17:04.674 [2024-11-17T18:26:02.941Z] Total : 6770.38 846.30 0.00 0.00 2358.48 1854.37 6434.44 00:17:04.674 0 00:17:04.674 18:26:02 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:04.674 18:26:02 -- host/digest.sh@92 -- # get_accel_stats 00:17:04.674 18:26:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:04.674 18:26:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:04.674 | select(.opcode=="crc32c") 00:17:04.674 | "\(.module_name) \(.executed)"' 00:17:04.674 18:26:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:04.933 18:26:03 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:04.933 18:26:03 -- host/digest.sh@93 -- # exp_module=software 00:17:04.933 18:26:03 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:04.933 18:26:03 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:04.933 18:26:03 -- host/digest.sh@97 -- # killprocess 83343 00:17:04.933 18:26:03 -- common/autotest_common.sh@936 -- # '[' -z 83343 ']' 00:17:04.933 18:26:03 -- common/autotest_common.sh@940 -- # kill -0 83343 00:17:04.933 18:26:03 -- common/autotest_common.sh@941 -- # uname 00:17:04.933 18:26:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:04.933 18:26:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83343 00:17:05.193 killing process with pid 83343 00:17:05.193 Received shutdown signal, test time was about 2.000000 seconds 00:17:05.193 00:17:05.193 Latency(us) 00:17:05.193 [2024-11-17T18:26:03.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.193 [2024-11-17T18:26:03.460Z] =================================================================================================================== 00:17:05.193 [2024-11-17T18:26:03.460Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.193 18:26:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:05.193 18:26:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:05.193 18:26:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83343' 00:17:05.193 18:26:03 -- common/autotest_common.sh@955 -- # kill 83343 00:17:05.193 18:26:03 -- common/autotest_common.sh@960 -- # wait 83343 00:17:05.193 18:26:03 -- host/digest.sh@126 -- # killprocess 83162 00:17:05.193 18:26:03 -- common/autotest_common.sh@936 -- # '[' -z 83162 ']' 00:17:05.193 18:26:03 -- common/autotest_common.sh@940 -- # kill -0 83162 00:17:05.193 18:26:03 -- common/autotest_common.sh@941 -- # uname 00:17:05.193 18:26:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:05.193 18:26:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83162 00:17:05.193 killing process with pid 83162 00:17:05.193 18:26:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:05.193 18:26:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:05.193 18:26:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83162' 00:17:05.193 
18:26:03 -- common/autotest_common.sh@955 -- # kill 83162 00:17:05.193 18:26:03 -- common/autotest_common.sh@960 -- # wait 83162 00:17:05.452 00:17:05.452 real 0m15.191s 00:17:05.452 user 0m28.964s 00:17:05.452 sys 0m4.260s 00:17:05.452 18:26:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:05.452 ************************************ 00:17:05.452 END TEST nvmf_digest_clean 00:17:05.452 ************************************ 00:17:05.452 18:26:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.452 18:26:03 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:17:05.452 18:26:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:05.452 18:26:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:05.452 18:26:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.452 ************************************ 00:17:05.452 START TEST nvmf_digest_error 00:17:05.452 ************************************ 00:17:05.452 18:26:03 -- common/autotest_common.sh@1114 -- # run_digest_error 00:17:05.452 18:26:03 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:17:05.452 18:26:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:05.452 18:26:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:05.452 18:26:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.452 18:26:03 -- nvmf/common.sh@469 -- # nvmfpid=83420 00:17:05.452 18:26:03 -- nvmf/common.sh@470 -- # waitforlisten 83420 00:17:05.452 18:26:03 -- common/autotest_common.sh@829 -- # '[' -z 83420 ']' 00:17:05.452 18:26:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.452 18:26:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:05.452 18:26:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.452 18:26:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.452 18:26:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.452 18:26:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.452 [2024-11-17 18:26:03.642439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:05.452 [2024-11-17 18:26:03.642761] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.711 [2024-11-17 18:26:03.779471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.711 [2024-11-17 18:26:03.811808] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:05.711 [2024-11-17 18:26:03.811961] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.711 [2024-11-17 18:26:03.811975] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.711 [2024-11-17 18:26:03.811984] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
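The teardown repeated above for each bdevperf instance, and finally for the nvmf target itself, is the killprocess helper from autotest_common.sh. A condensed sketch of what this trace shows it doing (the real helper carries extra platform and retry logic):

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid"                                    # fail fast if it already exited
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_1 here
        [[ $process_name != sudo ]]                       # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap it and surface its exit status
    }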
00:17:05.711 [2024-11-17 18:26:03.812014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.711 18:26:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.711 18:26:03 -- common/autotest_common.sh@862 -- # return 0 00:17:05.711 18:26:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:05.711 18:26:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.711 18:26:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.711 18:26:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.711 18:26:03 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:05.711 18:26:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.711 18:26:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.711 [2024-11-17 18:26:03.920384] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:05.711 18:26:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.711 18:26:03 -- host/digest.sh@104 -- # common_target_config 00:17:05.711 18:26:03 -- host/digest.sh@43 -- # rpc_cmd 00:17:05.711 18:26:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.711 18:26:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.969 null0 00:17:05.969 [2024-11-17 18:26:03.989774] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.970 [2024-11-17 18:26:04.013893] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.970 18:26:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.970 18:26:04 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:17:05.970 18:26:04 -- host/digest.sh@54 -- # local rw bs qd 00:17:05.970 18:26:04 -- host/digest.sh@56 -- # rw=randread 00:17:05.970 18:26:04 -- host/digest.sh@56 -- # bs=4096 00:17:05.970 18:26:04 -- host/digest.sh@56 -- # qd=128 00:17:05.970 18:26:04 -- host/digest.sh@58 -- # bperfpid=83439 00:17:05.970 18:26:04 -- host/digest.sh@60 -- # waitforlisten 83439 /var/tmp/bperf.sock 00:17:05.970 18:26:04 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:05.970 18:26:04 -- common/autotest_common.sh@829 -- # '[' -z 83439 ']' 00:17:05.970 18:26:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:05.970 18:26:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.970 18:26:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:05.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:05.970 18:26:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.970 18:26:04 -- common/autotest_common.sh@10 -- # set +x 00:17:05.970 [2024-11-17 18:26:04.062552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:05.970 [2024-11-17 18:26:04.062789] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83439 ] 00:17:05.970 [2024-11-17 18:26:04.199729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.970 [2024-11-17 18:26:04.232995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.227 18:26:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.227 18:26:04 -- common/autotest_common.sh@862 -- # return 0 00:17:06.227 18:26:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:06.227 18:26:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:06.485 18:26:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:06.485 18:26:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.485 18:26:04 -- common/autotest_common.sh@10 -- # set +x 00:17:06.485 18:26:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.485 18:26:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.485 18:26:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.744 nvme0n1 00:17:06.744 18:26:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:06.744 18:26:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.744 18:26:04 -- common/autotest_common.sh@10 -- # set +x 00:17:06.744 18:26:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.744 18:26:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:06.744 18:26:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:07.004 Running I/O for 2 seconds... 
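The error-path test starting here reuses the same wiring but arms fault injection first: the target's crc32c opcode was assigned to the accel "error" module (accel_assign_opc above), and just before perform_tests the injection is switched from disable to corrupt, so the target-side crc32c results go bad and the initiator's digest check fails, which shows up as the data digest errors below. Condensed from the rpc_cmd/bperf_rpc calls in this stretch of the log (rpc_cmd talks to the nvmf target, bperf_rpc to bdevperf; the -i 256 argument is copied as-is from the trace):

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # per-NVMe error stats; retry count as in the trace
    rpc_cmd   accel_error_inject_error -o crc32c -t disable                   # connect with clean digests
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
              -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd   accel_error_inject_error -o crc32c -t corrupt -i 256            # now corrupt crc32c results
    bperf_py  perform_tests                                                   # expect transient transport errors below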
00:17:07.004 [2024-11-17 18:26:05.034786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.034887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.034902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.051260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.051328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.051376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.067138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.067386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.067422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.082847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.083064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.083099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.098525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.098756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.098948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.115103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.115315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.115531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.132166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.132382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.132566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.150104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.150354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.150578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.166323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.166571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.166745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.182172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.182408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.182603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.197963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.198169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.198433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.214295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.214360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.214392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.231205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.231428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.231448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.248469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.248525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.248556] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.004 [2024-11-17 18:26:05.266177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.004 [2024-11-17 18:26:05.266217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.004 [2024-11-17 18:26:05.266247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.283494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.283531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.264 [2024-11-17 18:26:05.283561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.300758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.300813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.264 [2024-11-17 18:26:05.300843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.318249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.318327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.264 [2024-11-17 18:26:05.318359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.335214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.335456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.264 [2024-11-17 18:26:05.335492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.351177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.351375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.264 [2024-11-17 18:26:05.351409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.367797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.367835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.264 [2024-11-17 18:26:05.367864] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.383556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.383592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.264 [2024-11-17 18:26:05.383621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.398851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.398903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.264 [2024-11-17 18:26:05.398947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.414018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.414055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.264 [2024-11-17 18:26:05.414084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.429286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.429352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.264 [2024-11-17 18:26:05.429381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.444800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.444990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.264 [2024-11-17 18:26:05.445041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.264 [2024-11-17 18:26:05.460319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.264 [2024-11-17 18:26:05.460356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.265 [2024-11-17 18:26:05.460385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.265 [2024-11-17 18:26:05.477091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.265 [2024-11-17 18:26:05.477132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:07.265 [2024-11-17 18:26:05.477161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.265 [2024-11-17 18:26:05.492776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.265 [2024-11-17 18:26:05.492812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.265 [2024-11-17 18:26:05.492840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.265 [2024-11-17 18:26:05.508007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.265 [2024-11-17 18:26:05.508043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.265 [2024-11-17 18:26:05.508072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.265 [2024-11-17 18:26:05.523237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.265 [2024-11-17 18:26:05.523469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.265 [2024-11-17 18:26:05.523503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.524 [2024-11-17 18:26:05.539968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.524 [2024-11-17 18:26:05.540019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.524 [2024-11-17 18:26:05.540048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.524 [2024-11-17 18:26:05.555210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.524 [2024-11-17 18:26:05.555425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.524 [2024-11-17 18:26:05.555458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.524 [2024-11-17 18:26:05.570488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.524 [2024-11-17 18:26:05.570706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.524 [2024-11-17 18:26:05.570740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.524 [2024-11-17 18:26:05.587200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.524 [2024-11-17 18:26:05.587241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:4043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.524 [2024-11-17 18:26:05.587273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.524 [2024-11-17 18:26:05.604176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.524 [2024-11-17 18:26:05.604229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.524 [2024-11-17 18:26:05.604258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.524 [2024-11-17 18:26:05.619991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.524 [2024-11-17 18:26:05.620028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.524 [2024-11-17 18:26:05.620057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.524 [2024-11-17 18:26:05.635987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.524 [2024-11-17 18:26:05.636038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.524 [2024-11-17 18:26:05.636067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.524 [2024-11-17 18:26:05.651925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.524 [2024-11-17 18:26:05.651964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.524 [2024-11-17 18:26:05.651992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.524 [2024-11-17 18:26:05.667966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.524 [2024-11-17 18:26:05.668020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.524 [2024-11-17 18:26:05.668050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.524 [2024-11-17 18:26:05.683668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.524 [2024-11-17 18:26:05.683705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.525 [2024-11-17 18:26:05.683734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.525 [2024-11-17 18:26:05.699555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.525 [2024-11-17 18:26:05.699591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.525 [2024-11-17 18:26:05.699619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.525 [2024-11-17 18:26:05.715218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.525 [2024-11-17 18:26:05.715255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.525 [2024-11-17 18:26:05.715284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.525 [2024-11-17 18:26:05.731429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.525 [2024-11-17 18:26:05.731467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.525 [2024-11-17 18:26:05.731481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.525 [2024-11-17 18:26:05.747349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.525 [2024-11-17 18:26:05.747404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.525 [2024-11-17 18:26:05.747435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.525 [2024-11-17 18:26:05.762614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.525 [2024-11-17 18:26:05.762803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.525 [2024-11-17 18:26:05.762839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.525 [2024-11-17 18:26:05.778065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.525 [2024-11-17 18:26:05.778102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.525 [2024-11-17 18:26:05.778132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.784 [2024-11-17 18:26:05.794471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.784 [2024-11-17 18:26:05.794530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.784 [2024-11-17 18:26:05.794562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.784 [2024-11-17 18:26:05.809753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 
00:17:07.784 [2024-11-17 18:26:05.809789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.784 [2024-11-17 18:26:05.809818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.784 [2024-11-17 18:26:05.824859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.784 [2024-11-17 18:26:05.825059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.784 [2024-11-17 18:26:05.825093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.784 [2024-11-17 18:26:05.840695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.784 [2024-11-17 18:26:05.840733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.784 [2024-11-17 18:26:05.840762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.784 [2024-11-17 18:26:05.856826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.784 [2024-11-17 18:26:05.856867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.784 [2024-11-17 18:26:05.856897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.784 [2024-11-17 18:26:05.873887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.784 [2024-11-17 18:26:05.874058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.784 [2024-11-17 18:26:05.874108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.784 [2024-11-17 18:26:05.889616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.784 [2024-11-17 18:26:05.889654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.784 [2024-11-17 18:26:05.889698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.784 [2024-11-17 18:26:05.904988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.784 [2024-11-17 18:26:05.905169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.784 [2024-11-17 18:26:05.905202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.784 [2024-11-17 18:26:05.920446] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.784 [2024-11-17 18:26:05.920481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.784 [2024-11-17 18:26:05.920510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.784 [2024-11-17 18:26:05.935658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.785 [2024-11-17 18:26:05.935694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.785 [2024-11-17 18:26:05.935723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.785 [2024-11-17 18:26:05.950768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.785 [2024-11-17 18:26:05.950972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.785 [2024-11-17 18:26:05.951005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.785 [2024-11-17 18:26:05.966145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.785 [2024-11-17 18:26:05.966335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.785 [2024-11-17 18:26:05.966370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.785 [2024-11-17 18:26:05.982716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.785 [2024-11-17 18:26:05.982758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.785 [2024-11-17 18:26:05.982804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.785 [2024-11-17 18:26:05.999306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.785 [2024-11-17 18:26:05.999368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.785 [2024-11-17 18:26:05.999398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.785 [2024-11-17 18:26:06.014555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.785 [2024-11-17 18:26:06.014736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.785 [2024-11-17 18:26:06.014770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:07.785 [2024-11-17 18:26:06.029837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:07.785 [2024-11-17 18:26:06.029874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.785 [2024-11-17 18:26:06.029903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.044 [2024-11-17 18:26:06.052352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.044 [2024-11-17 18:26:06.052390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.044 [2024-11-17 18:26:06.052403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.044 [2024-11-17 18:26:06.068867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.044 [2024-11-17 18:26:06.068905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.044 [2024-11-17 18:26:06.068933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.044 [2024-11-17 18:26:06.085511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.044 [2024-11-17 18:26:06.085694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.044 [2024-11-17 18:26:06.085727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.044 [2024-11-17 18:26:06.101378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.044 [2024-11-17 18:26:06.101430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.044 [2024-11-17 18:26:06.101459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.044 [2024-11-17 18:26:06.116856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.117043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.117079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.132030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.132067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.132096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.147926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.147963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.148008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.165693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.165875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.165908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.182288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.182382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.182409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.197688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.197725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.197754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.212592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.212774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.212807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.227791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.227828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.227857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.242879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.243076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 
18:26:06.243109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.259458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.259495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.259524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.274380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.274416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.274444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.289308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.289345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.289374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.045 [2024-11-17 18:26:06.304267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.045 [2024-11-17 18:26:06.304310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.045 [2024-11-17 18:26:06.304339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.322909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.322947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.322976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.341219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.341261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.341290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.359440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.359477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15018 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.359506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.376977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.377048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.377079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.394924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.394962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.394992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.412434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.412471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.412501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.428313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.428349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.428377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.444081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.444135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.444164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.459939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.460111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.460147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.476009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.476047] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.476076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.492389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.492427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.492457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.508857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.508894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.508924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.525813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.525851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.525881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.542029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.542067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.542098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.305 [2024-11-17 18:26:06.557122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.305 [2024-11-17 18:26:06.557159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.305 [2024-11-17 18:26:06.557187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.573415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.573653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.573698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.589224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.589481] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.589645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.606225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.606477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.606733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.623473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.623686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.623843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.640583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.640846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.641004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.659097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.659327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.659501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.674671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.674897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.675054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.690217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.690456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.690648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.706004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.706208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.706493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.722526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.722582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.722612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.739533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.739575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.739589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.756263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.756328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.756369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.772982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.773021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.773035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.789145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.789183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.789212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.804933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.804971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.805000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-11-17 18:26:06.820713] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.565 [2024-11-17 18:26:06.820749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-11-17 18:26:06.820778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-11-17 18:26:06.837426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.825 [2024-11-17 18:26:06.837464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-11-17 18:26:06.837494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-11-17 18:26:06.853253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.825 [2024-11-17 18:26:06.853333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-11-17 18:26:06.853364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-11-17 18:26:06.869028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.825 [2024-11-17 18:26:06.869065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-11-17 18:26:06.869094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-11-17 18:26:06.884991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.825 [2024-11-17 18:26:06.885217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-11-17 18:26:06.885251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-11-17 18:26:06.900557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.825 [2024-11-17 18:26:06.900760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-11-17 18:26:06.900979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-11-17 18:26:06.916284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.825 [2024-11-17 18:26:06.916517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-11-17 18:26:06.916751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:08.825 [2024-11-17 18:26:06.932412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.825 [2024-11-17 18:26:06.932604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-11-17 18:26:06.932768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-11-17 18:26:06.948060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.825 [2024-11-17 18:26:06.948265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-11-17 18:26:06.948495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-11-17 18:26:06.964066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.825 [2024-11-17 18:26:06.964257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-11-17 18:26:06.964523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-11-17 18:26:06.980245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.825 [2024-11-17 18:26:06.980468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-11-17 18:26:06.980642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.826 [2024-11-17 18:26:06.996213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.826 [2024-11-17 18:26:06.996447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-11-17 18:26:06.996603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.826 [2024-11-17 18:26:07.013879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2326410) 00:17:08.826 [2024-11-17 18:26:07.014062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-11-17 18:26:07.014102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.826 00:17:08.826 Latency(us) 00:17:08.826 [2024-11-17T18:26:07.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.826 [2024-11-17T18:26:07.093Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:08.826 nvme0n1 : 2.01 15671.45 61.22 0.00 0.00 8161.85 7268.54 31457.28 00:17:08.826 [2024-11-17T18:26:07.093Z] 
=================================================================================================================== 00:17:08.826 [2024-11-17T18:26:07.093Z] Total : 15671.45 61.22 0.00 0.00 8161.85 7268.54 31457.28 00:17:08.826 0 00:17:08.826 18:26:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:08.826 18:26:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:08.826 18:26:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:08.826 18:26:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:08.826 | .driver_specific 00:17:08.826 | .nvme_error 00:17:08.826 | .status_code 00:17:08.826 | .command_transient_transport_error' 00:17:09.084 18:26:07 -- host/digest.sh@71 -- # (( 123 > 0 )) 00:17:09.084 18:26:07 -- host/digest.sh@73 -- # killprocess 83439 00:17:09.084 18:26:07 -- common/autotest_common.sh@936 -- # '[' -z 83439 ']' 00:17:09.084 18:26:07 -- common/autotest_common.sh@940 -- # kill -0 83439 00:17:09.084 18:26:07 -- common/autotest_common.sh@941 -- # uname 00:17:09.084 18:26:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.084 18:26:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83439 00:17:09.342 killing process with pid 83439 00:17:09.342 Received shutdown signal, test time was about 2.000000 seconds 00:17:09.342 00:17:09.342 Latency(us) 00:17:09.342 [2024-11-17T18:26:07.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.342 [2024-11-17T18:26:07.609Z] =================================================================================================================== 00:17:09.342 [2024-11-17T18:26:07.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.342 18:26:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:09.342 18:26:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:09.343 18:26:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83439' 00:17:09.343 18:26:07 -- common/autotest_common.sh@955 -- # kill 83439 00:17:09.343 18:26:07 -- common/autotest_common.sh@960 -- # wait 83439 00:17:09.343 18:26:07 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:17:09.343 18:26:07 -- host/digest.sh@54 -- # local rw bs qd 00:17:09.343 18:26:07 -- host/digest.sh@56 -- # rw=randread 00:17:09.343 18:26:07 -- host/digest.sh@56 -- # bs=131072 00:17:09.343 18:26:07 -- host/digest.sh@56 -- # qd=16 00:17:09.343 18:26:07 -- host/digest.sh@58 -- # bperfpid=83492 00:17:09.343 18:26:07 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:09.343 18:26:07 -- host/digest.sh@60 -- # waitforlisten 83492 /var/tmp/bperf.sock 00:17:09.343 18:26:07 -- common/autotest_common.sh@829 -- # '[' -z 83492 ']' 00:17:09.343 18:26:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:09.343 18:26:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.343 18:26:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:09.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
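The trace above is host/digest.sh reading the error count back out of bperf once the 2-second run finishes: it calls bdev_get_iostat over the bdevperf RPC socket and pulls command_transient_transport_error out of the JSON with jq (123 errors here, so the check passes and the bperf process is killed before the next pass is set up). A minimal standalone sketch of that query, using only the socket path, bdev name, and jq filter that appear in the trace; the variable names and the script framing are illustrative assumptions, not part of the test:

  #!/usr/bin/env bash
  # Sketch: count NVMe "command transient transport error" completions recorded by bdevperf.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # checkout location as seen in the log (assumed variable name)
  BPERF_SOCK=/var/tmp/bperf.sock          # RPC socket the bdevperf instance listens on

  errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The test only asserts that at least one transient transport error was observed.
  (( errcount > 0 )) && echo "PASS: $errcount transient transport errors" || echo "FAIL"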
00:17:09.343 18:26:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.343 18:26:07 -- common/autotest_common.sh@10 -- # set +x 00:17:09.343 [2024-11-17 18:26:07.538549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:09.343 [2024-11-17 18:26:07.538849] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83492 ] 00:17:09.343 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:09.343 Zero copy mechanism will not be used. 00:17:09.601 [2024-11-17 18:26:07.674692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.601 [2024-11-17 18:26:07.706249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.539 18:26:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.539 18:26:08 -- common/autotest_common.sh@862 -- # return 0 00:17:10.539 18:26:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:10.539 18:26:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:10.539 18:26:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:10.539 18:26:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.539 18:26:08 -- common/autotest_common.sh@10 -- # set +x 00:17:10.539 18:26:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.539 18:26:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:10.539 18:26:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:11.110 nvme0n1 00:17:11.110 18:26:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:11.110 18:26:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.110 18:26:09 -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 18:26:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.110 18:26:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:11.110 18:26:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:11.110 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:11.110 Zero copy mechanism will not be used. 00:17:11.110 Running I/O for 2 seconds... 
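The setup traced above prepares the second randread pass (128 KiB I/O, queue depth 16): bdevperf is started in wait-for-RPC mode, NVMe error statistics are enabled with unlimited bdev retries so digest failures stay transient, CRC32C error injection is disabled while the controller attaches over TCP with data digest (--ddgst) enabled, and injection is then re-armed to corrupt every 32nd CRC32C operation before perform_tests starts the timed run. A condensed sketch built only from the RPC calls visible in the trace; the variable names are assumptions, and the accel injection is shown against the target application's default RPC socket, which is how rpc_cmd reaches it in this test:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumed variable name; path matches the log
  BPERF_SOCK=/var/tmp/bperf.sock

  # bdevperf on core mask 0x2, 128 KiB randread, qd 16, 2 s, waiting for perform_tests (-z).
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &

  # Keep per-type NVMe error counters and retry failed I/O indefinitely.
  "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Injection stays off while the controller attaches cleanly (default target RPC socket assumed).
  "$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # Attach the TCP controller with data digest enabled so corrupted digests are detected.
  "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 32nd CRC32C operation so read-data digests fail verification on the host.
  "$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Start the timed I/O run through the bdevperf helper.
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests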
00:17:11.110 [2024-11-17 18:26:09.233041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.233303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.233325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.237623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.237674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.237702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.241789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.241825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.241853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.245851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.245888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.245916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.249868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.249906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.249935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.253952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.253988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.254018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.257958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.257994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.258022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.262004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.262041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.262069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.266053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.266089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.266118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.270130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.270166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.270194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.274154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.274191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.274219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.278233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.278270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.278328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.282408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.282444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.282473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.286463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.286522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.286552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.290357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.290392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.290420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.294340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.294390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.294419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.298379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.298414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.298441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.302314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.302349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.302377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.306275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.306338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.306367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.310442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.310478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.310530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.314325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.314360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:11.110 [2024-11-17 18:26:09.314387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.318355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.318390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.318418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.322370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.322405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.322433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.326428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.326463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.326491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.330405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.330441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.330468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.334361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.334396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.334424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.338399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.338435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.338463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.342365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.342399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.342427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.346315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.346351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.346379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.350268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.350333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.350362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.354324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.354359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.354387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.358282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.358326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.358355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.362607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.362648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.362663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.366893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.366943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.366971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.110 [2024-11-17 18:26:09.371655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.110 [2024-11-17 18:26:09.371710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.110 [2024-11-17 18:26:09.371740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.369 [2024-11-17 18:26:09.376404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.376457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.376487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.380909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.380947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.380976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.385328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.385377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.385406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.390033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.390072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.390101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.394652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.394693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.394708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.399087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.399122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.399150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.403593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 
00:17:11.370 [2024-11-17 18:26:09.403631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.403675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.408316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.408371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.408387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.412893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.412931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.412960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.417632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.417670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.417715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.422237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.422287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.422302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.426447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.426482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.426534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.430700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.430740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.430754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.434841] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.434894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.434922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.439006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.439042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.439070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.443069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.443104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.443132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.447245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.447307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.447338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.451294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.451356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.451387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.455987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.456023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.456051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.460144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.460180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.460208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.464251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.464332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.464361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.468270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.468332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.468360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.472328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.472363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.472391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.476351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.476387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.476415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.480253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.480334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.480364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.484331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.370 [2024-11-17 18:26:09.484365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.370 [2024-11-17 18:26:09.484394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.370 [2024-11-17 18:26:09.488352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.488387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.488415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.492391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.492427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.492456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.496444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.496480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.496508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.500541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.500577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.500606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.504538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.504573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.504602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.508542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.508578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.508607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.512630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.512666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.512694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.516780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.516816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.516844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.520960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.520996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.521024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.525094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.525130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.525158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.529212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.529248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.529276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.533272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.533338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.533367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.537387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.537422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.537450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.541311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.541345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.541374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.545235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.545461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:11.371 [2024-11-17 18:26:09.545494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.549526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.549563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.549590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.553547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.553584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.553612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.557539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.557575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.557603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.561546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.561582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.561610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.565468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.565503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.565531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.569344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.569379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.569407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.573351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.573386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.573414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.577257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.577478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.577512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.581563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.581599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.581627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.585494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.585530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.585558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.589450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.589486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.589513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.593475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.371 [2024-11-17 18:26:09.593511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.371 [2024-11-17 18:26:09.593539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.371 [2024-11-17 18:26:09.597613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.372 [2024-11-17 18:26:09.597651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.372 [2024-11-17 18:26:09.597680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.372 [2024-11-17 18:26:09.601647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.372 [2024-11-17 18:26:09.601684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.372 [2024-11-17 18:26:09.601712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.372 [2024-11-17 18:26:09.605756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.372 [2024-11-17 18:26:09.605793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.372 [2024-11-17 18:26:09.605821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.372 [2024-11-17 18:26:09.610051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.372 [2024-11-17 18:26:09.610090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.372 [2024-11-17 18:26:09.610120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.372 [2024-11-17 18:26:09.614735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.372 [2024-11-17 18:26:09.614776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.372 [2024-11-17 18:26:09.614790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.372 [2024-11-17 18:26:09.619240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.372 [2024-11-17 18:26:09.619290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.372 [2024-11-17 18:26:09.619305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.372 [2024-11-17 18:26:09.623742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.372 [2024-11-17 18:26:09.623778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.372 [2024-11-17 18:26:09.623806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.372 [2024-11-17 18:26:09.628493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.372 [2024-11-17 18:26:09.628531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.372 [2024-11-17 18:26:09.628560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.372 [2024-11-17 18:26:09.633308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 
00:17:11.372 [2024-11-17 18:26:09.633374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.372 [2024-11-17 18:26:09.633406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.635 [2024-11-17 18:26:09.637753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.635 [2024-11-17 18:26:09.637790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.637803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.642072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.642110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.642138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.646131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.646168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.646180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.650195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.650231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.650260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.654178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.654215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.654243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.658195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.658230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.658258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.662357] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.662394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.662422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.666292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.666327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.666355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.670166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.670202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.670230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.674237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.674302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.674333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.678375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.678411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.678439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.682340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.682375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.682402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.686311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.686345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.686373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.690309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.690343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.690371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.694233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.694270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.694328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.698279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.698338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.698352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.702246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.702306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.702319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.706234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.706270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.706328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.710294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.710328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.710356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.714352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.714387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.714415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.718306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.718341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.718368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.722348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.722383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.722411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.726314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.726348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.726376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.730210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.730245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.730274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.734213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.734249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.734277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.738235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.738297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.738311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.742194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.742230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.742259] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.746151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.746187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.746215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.750157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.636 [2024-11-17 18:26:09.750193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.636 [2024-11-17 18:26:09.750222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.636 [2024-11-17 18:26:09.754272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.754317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.754345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.758218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.758255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.758284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.762158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.762194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.762222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.766258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.766323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.766351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.770237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.770303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.770333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.774346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.774381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.774409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.778339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.778374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.778402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.782321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.782355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.782383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.786296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.786330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.786359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.790283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.790316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.790343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.794138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.794174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.794202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.798272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.798335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.798366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.802228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.802264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.802302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.806246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.806308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.806338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.810667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.810716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.810730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.815359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.815470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.815499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.820032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.820260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.820298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.824893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.824930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.824958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.829626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.829694] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.829722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.834255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.834320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.834336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.838965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.839031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.839046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.843696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.843732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.843761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.848430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.848486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.848500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.852932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.852968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.853013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.857749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.857918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.857952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.862712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.862755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.637 [2024-11-17 18:26:09.862770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.637 [2024-11-17 18:26:09.867540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.637 [2024-11-17 18:26:09.867592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.638 [2024-11-17 18:26:09.867605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.638 [2024-11-17 18:26:09.872210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.638 [2024-11-17 18:26:09.872253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.638 [2024-11-17 18:26:09.872268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.638 [2024-11-17 18:26:09.876799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.638 [2024-11-17 18:26:09.876980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.638 [2024-11-17 18:26:09.877031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.638 [2024-11-17 18:26:09.881590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.638 [2024-11-17 18:26:09.881628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.638 [2024-11-17 18:26:09.881658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.638 [2024-11-17 18:26:09.886385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.638 [2024-11-17 18:26:09.886434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.638 [2024-11-17 18:26:09.886463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.638 [2024-11-17 18:26:09.891171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.638 [2024-11-17 18:26:09.891213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.638 [2024-11-17 18:26:09.891227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.638 [2024-11-17 18:26:09.895833] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.638 [2024-11-17 18:26:09.896058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.638 [2024-11-17 18:26:09.896078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.900673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.900715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.900729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.905137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.905180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.905195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.909588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.909631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.909645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.914135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.914177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.914192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.918638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.918678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.918692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.923039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.923076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.923104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.927365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.927400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.927428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.931601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.931636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.931649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.935970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.936007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.936036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.940189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.940225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.940253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.944402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.944439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.944467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.948580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.948617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.948646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.952637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.952673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.952702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.956721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.956758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.956787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.935 [2024-11-17 18:26:09.961050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.935 [2024-11-17 18:26:09.961087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.935 [2024-11-17 18:26:09.961116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:09.965203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:09.965240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:09.965269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:09.969332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:09.969367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:09.969395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:09.973696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:09.973733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:09.973761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:09.977697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:09.977734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:09.977762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:09.981732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:09.981769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:09.981798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:09.985987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:09.986049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:09.986061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:09.990243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:09.990326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:09.990359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:09.994636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:09.994677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:09.994692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:09.998904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:09.998954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:09.998982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.003377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.003446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.003475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.007836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.007876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.007905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.012226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.012267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:11.936 [2024-11-17 18:26:10.012330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.016569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.016607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.016636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.021227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.021267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.021292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.025548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.025585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.025614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.029751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.029789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.029817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.033894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.033931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.033959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.038124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.038163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.038193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.042212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.042249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.042278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.046364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.046400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.046428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.050429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.050465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.050493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.054592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.054634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.054648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.058563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.058604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.058618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.062602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.062643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.062656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.066627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.066667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.066696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.070723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.070763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.936 [2024-11-17 18:26:10.070792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.936 [2024-11-17 18:26:10.074812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.936 [2024-11-17 18:26:10.074894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.074922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.079069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.079109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.079137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.083081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.083122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.083151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.087276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.087355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.087401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.091361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.091441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.091471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.095507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.095543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.095571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.099545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 
00:17:11.937 [2024-11-17 18:26:10.099584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.099597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.103774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.103813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.103827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.108401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.108451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.108480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.112836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.112873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.112902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.117271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.117334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.117364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.121610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.121646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.121674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.125706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.125742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.125769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.129786] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.129821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.129849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.133781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.133816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.133846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.137974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.138010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.138038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.141995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.142031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.142059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.146562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.146602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.146616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.151256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.151457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.151477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.156107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.156150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.156165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.160827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.160866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.160879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.165046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.165085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.165099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.169549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.169592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.169606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.174157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.174213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.174243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.178617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.178794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.178814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.183348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.183415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.183445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.188109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.937 [2024-11-17 18:26:10.188151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.937 [2024-11-17 18:26:10.188182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.937 [2024-11-17 18:26:10.192727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:11.938 [2024-11-17 18:26:10.192763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.938 [2024-11-17 18:26:10.192792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.197409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.197494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.197541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.201941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.202123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.202143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.206544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.206587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.206601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.211074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.211113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.211144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.215415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.215453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.215482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.220040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.220080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.220111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.224719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.224757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.224785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.229465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.229502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.229531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.234118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.234160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.234174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.238725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.238767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.238781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.243488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.243525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.243553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.247939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.247976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.248021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.252511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.252546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:12.212 [2024-11-17 18:26:10.252574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.256592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.256631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.256660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.260840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.260877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.212 [2024-11-17 18:26:10.260905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.212 [2024-11-17 18:26:10.265034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.212 [2024-11-17 18:26:10.265071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.265099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.269374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.269428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.269472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.273597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.273633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.273662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.277633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.277668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.277697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.281894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.281932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.281961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.285958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.285994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.286023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.290002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.290040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.290069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.294280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.294345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.294376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.298318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.298354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.298382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.302282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.302328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.302357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.306478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.306540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.306554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.310462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.310520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.310549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.314492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.314554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.314583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.318705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.318759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.318788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.323034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.323074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.323103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.327505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.327542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.327570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.331706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.331742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.331770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.335779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.335816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.335845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.339966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 
00:17:12.213 [2024-11-17 18:26:10.340002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.340030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.344001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.344036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.344064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.348018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.348054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.348082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.352680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.352717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.352745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.357210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.357250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.357281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.361731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.361766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.361794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.366091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.366128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.366157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.370271] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.370335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.370381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.374678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.213 [2024-11-17 18:26:10.374719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.213 [2024-11-17 18:26:10.374733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.213 [2024-11-17 18:26:10.379036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.379209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.379243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.383380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.383417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.383445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.387331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.387393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.387421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.391219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.391438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.391473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.395826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.395863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.395891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.400209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.400247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.400275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.404767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.404805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.404818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.409108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.409146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.409159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.413722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.413758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.413787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.417984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.418020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.418049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.422445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.422482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.422503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.427364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.427416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.427432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.432047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.432085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.432115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.436173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.436209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.436237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.440335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.440371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.440400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.444436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.444471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.444499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.448491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.448526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.448555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.452484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.452519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.452548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.456559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.456594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.456622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.460578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.460614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.460642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.464461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.464496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.464525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.468309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.468343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.468371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.214 [2024-11-17 18:26:10.472442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.214 [2024-11-17 18:26:10.472479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.214 [2024-11-17 18:26:10.472508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.475 [2024-11-17 18:26:10.476950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.475 [2024-11-17 18:26:10.477004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.475 [2024-11-17 18:26:10.477037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.475 [2024-11-17 18:26:10.481181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.475 [2024-11-17 18:26:10.481220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.475 [2024-11-17 18:26:10.481249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.475 [2024-11-17 18:26:10.485526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.475 [2024-11-17 18:26:10.485562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:12.475 [2024-11-17 18:26:10.485590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.475 [2024-11-17 18:26:10.489490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.475 [2024-11-17 18:26:10.489526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.475 [2024-11-17 18:26:10.489554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.475 [2024-11-17 18:26:10.493491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.475 [2024-11-17 18:26:10.493525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.475 [2024-11-17 18:26:10.493553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.475 [2024-11-17 18:26:10.497682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.475 [2024-11-17 18:26:10.497719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.497747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.501788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.501824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.501853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.505837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.505873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.505901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.509815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.509851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.509880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.513916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.513951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.513980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.518083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.518120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.518148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.522131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.522166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.522194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.526185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.526222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.526250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.530102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.530137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.530165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.534033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.534070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.534098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.538124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.538159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.538188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.542216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.542252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.542281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.546165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.546202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.546230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.550180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.550216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.550244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.554679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.554721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.554735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.559200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.559239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.559270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.563764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.563800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.563830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.568542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.568576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.568605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.573063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 
00:17:12.476 [2024-11-17 18:26:10.573119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.573149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.577489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.577524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.577552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.581895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.581942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.581970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.586468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.586525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.586540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.590885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.590919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.590947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.595652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.595703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.595730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.600273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.600329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.600364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.604921] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.604957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.604986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.609645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.476 [2024-11-17 18:26:10.609682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.476 [2024-11-17 18:26:10.609710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.476 [2024-11-17 18:26:10.614118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.614159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.614174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.618823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.619063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.619082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.623864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.623901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.623929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.628568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.628604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.628632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.633124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.633167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.633181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.637748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.637801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.637829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.642266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.642556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.642575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.646993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.647051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.647082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.651259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.651335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.651365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.655066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.655102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.655129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.658982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.659018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.659045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.662888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.662923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.662952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.666840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.666876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.666904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.670728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.670765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.670794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.674628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.674665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.674693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.678631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.678667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.678696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.682477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.682539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.682568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.686930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.686968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.686980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.691478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.691513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.691541] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.695304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.695369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.695399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.699196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.699233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.699272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.703137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.703173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.703202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.707007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.707044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.707072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.710909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.710944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.710971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.714756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.714795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.714807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.718699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.718736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.718765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.477 [2024-11-17 18:26:10.722707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.477 [2024-11-17 18:26:10.722745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.477 [2024-11-17 18:26:10.722758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.478 [2024-11-17 18:26:10.726643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.478 [2024-11-17 18:26:10.726680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.478 [2024-11-17 18:26:10.726692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.478 [2024-11-17 18:26:10.730437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.478 [2024-11-17 18:26:10.730472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.478 [2024-11-17 18:26:10.730522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.478 [2024-11-17 18:26:10.734244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.478 [2024-11-17 18:26:10.734447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.478 [2024-11-17 18:26:10.734480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.478 [2024-11-17 18:26:10.739122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.478 [2024-11-17 18:26:10.739175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.478 [2024-11-17 18:26:10.739203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.738 [2024-11-17 18:26:10.743443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.738 [2024-11-17 18:26:10.743479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.738 [2024-11-17 18:26:10.743508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.738 [2024-11-17 18:26:10.747662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.738 [2024-11-17 18:26:10.747712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.738 [2024-11-17 18:26:10.747741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.738 [2024-11-17 18:26:10.751540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.751576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.751605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.755453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.755489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.755517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.759220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.759256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.759314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.763189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.763225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.763253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.767091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.767127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.767155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.771022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.771058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.771086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.775052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.775089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.775116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.779094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.779130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.779159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.783031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.783067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.783095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.787018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.787053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.787082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.790949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.790985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.791014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.794931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.794967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.794995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.798991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.799028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.799057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.802906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 
00:17:12.739 [2024-11-17 18:26:10.802942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.802971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.806862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.806897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.806925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.810984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.811020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.811048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.814929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.814964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.814992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.818879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.818915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.818958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.822988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.823023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.823051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.827029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.827066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.827094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.831050] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.831084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.831113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.835038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.835073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.835101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.839094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.839130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.839158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.843126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.843162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.843189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.847159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.847195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.847223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.851140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.739 [2024-11-17 18:26:10.851176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.739 [2024-11-17 18:26:10.851204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.739 [2024-11-17 18:26:10.855166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.855202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.855229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.859158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.859194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.859222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.863163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.863198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.863226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.867109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.867145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.867174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.871169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.871205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.871233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.875273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.875353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.875399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.879197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.879233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.879261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.883133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.883169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.883198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.887188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.887223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.887251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.891138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.891174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.891202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.895209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.895245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.895273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.899179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.899218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.899246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.903697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.903747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.903775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.908222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.908261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.908311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.912729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.912765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.912792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.916977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.917184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.917218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.921510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.921545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.921574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.925464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.925499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.925527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.929462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.929497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.929525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.933409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.933444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.933472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.937467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.937502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.937530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.941497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.941532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:12.740 [2024-11-17 18:26:10.941560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.945920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.945957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.945985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.950557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.950608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.950622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.954436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.954471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.954526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.958418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.958451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.958479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.740 [2024-11-17 18:26:10.962421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.740 [2024-11-17 18:26:10.962455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.740 [2024-11-17 18:26:10.962483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.741 [2024-11-17 18:26:10.966412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.741 [2024-11-17 18:26:10.966447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.741 [2024-11-17 18:26:10.966475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.741 [2024-11-17 18:26:10.970420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.741 [2024-11-17 18:26:10.970455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.741 [2024-11-17 18:26:10.970482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.741 [2024-11-17 18:26:10.974375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.741 [2024-11-17 18:26:10.974421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.741 [2024-11-17 18:26:10.974449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.741 [2024-11-17 18:26:10.978335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.741 [2024-11-17 18:26:10.978370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.741 [2024-11-17 18:26:10.978398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.741 [2024-11-17 18:26:10.982216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.741 [2024-11-17 18:26:10.982251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.741 [2024-11-17 18:26:10.982279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.741 [2024-11-17 18:26:10.986270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.741 [2024-11-17 18:26:10.986316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.741 [2024-11-17 18:26:10.986345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.741 [2024-11-17 18:26:10.990343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.741 [2024-11-17 18:26:10.990378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.741 [2024-11-17 18:26:10.990407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.741 [2024-11-17 18:26:10.994776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.741 [2024-11-17 18:26:10.994829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.741 [2024-11-17 18:26:10.994857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.741 [2024-11-17 18:26:10.999221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:12.741 [2024-11-17 18:26:10.999263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.741 [2024-11-17 18:26:10.999290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.001 [2024-11-17 18:26:11.003838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.001 [2024-11-17 18:26:11.003891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.001 [2024-11-17 18:26:11.003904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.001 [2024-11-17 18:26:11.008606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.001 [2024-11-17 18:26:11.008689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.001 [2024-11-17 18:26:11.008717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.001 [2024-11-17 18:26:11.013102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.001 [2024-11-17 18:26:11.013139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.001 [2024-11-17 18:26:11.013167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.001 [2024-11-17 18:26:11.017502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.001 [2024-11-17 18:26:11.017539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.001 [2024-11-17 18:26:11.017568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.001 [2024-11-17 18:26:11.021717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.001 [2024-11-17 18:26:11.021753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.001 [2024-11-17 18:26:11.021782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.001 [2024-11-17 18:26:11.026089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.001 [2024-11-17 18:26:11.026136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.001 [2024-11-17 18:26:11.026166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.030532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 
00:17:13.002 [2024-11-17 18:26:11.030573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.030588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.034640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.034694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.034724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.038731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.038773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.038787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.042933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.042969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.043008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.047039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.047077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.047105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.051163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.051200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.051229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.055553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.055590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.055619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.059681] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.059733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.059762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.063837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.063874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.063901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.068117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.068153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.068182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.072373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.072409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.072437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.076465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.076501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.076530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.080785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.080821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.080849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.084857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.084893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.084921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.088969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.089006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.089034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.093504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.093541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.093570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.098081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.098121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.098151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.102446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.102483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.102536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.106871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.106924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.106953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.111162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.111200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.111231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.115442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.115480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.115510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.119817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.119855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.119884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.124102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.124140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.124168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.128400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.128437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.128466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.132700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.132735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.132764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.136868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.136903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.002 [2024-11-17 18:26:11.136931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.002 [2024-11-17 18:26:11.140852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.002 [2024-11-17 18:26:11.140899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.140927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.144951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.144991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.145020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.149252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.149320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.149333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.153501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.153537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.153549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.157530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.157568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.157581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.161887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.161939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.161969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.166749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.166790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.166805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.171121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.171158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.171187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.175447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.175482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:13.003 [2024-11-17 18:26:11.175510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.179658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.179708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.179737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.183821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.183856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.183884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.187942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.187977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.188006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.192092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.192130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.192158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.196069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.196105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.196133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.200258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.200321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.200351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.204752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.204977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.205011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.209652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.209690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.209703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.213637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.213673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.213701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.217621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.217657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.217685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.003 [2024-11-17 18:26:11.221506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20a95b0) 00:17:13.003 [2024-11-17 18:26:11.221541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.003 [2024-11-17 18:26:11.221569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.003 00:17:13.003 Latency(us) 00:17:13.003 [2024-11-17T18:26:11.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.003 [2024-11-17T18:26:11.270Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:13.003 nvme0n1 : 2.00 7309.44 913.68 0.00 0.00 2185.79 1705.43 11200.70 00:17:13.003 [2024-11-17T18:26:11.270Z] =================================================================================================================== 00:17:13.003 [2024-11-17T18:26:11.270Z] Total : 7309.44 913.68 0.00 0.00 2185.79 1705.43 11200.70 00:17:13.003 0 00:17:13.003 18:26:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:13.003 18:26:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:13.003 18:26:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:13.003 18:26:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:13.003 | .driver_specific 00:17:13.003 | .nvme_error 00:17:13.003 | .status_code 00:17:13.003 | .command_transient_transport_error' 00:17:13.263 18:26:11 -- host/digest.sh@71 -- # (( 472 > 0 )) 00:17:13.263 18:26:11 -- host/digest.sh@73 -- # killprocess 83492 
00:17:13.263 18:26:11 -- common/autotest_common.sh@936 -- # '[' -z 83492 ']' 00:17:13.263 18:26:11 -- common/autotest_common.sh@940 -- # kill -0 83492 00:17:13.263 18:26:11 -- common/autotest_common.sh@941 -- # uname 00:17:13.263 18:26:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.263 18:26:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83492 00:17:13.522 killing process with pid 83492 00:17:13.522 Received shutdown signal, test time was about 2.000000 seconds 00:17:13.522 00:17:13.522 Latency(us) 00:17:13.522 [2024-11-17T18:26:11.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.522 [2024-11-17T18:26:11.789Z] =================================================================================================================== 00:17:13.522 [2024-11-17T18:26:11.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:13.522 18:26:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:13.522 18:26:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:13.522 18:26:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83492' 00:17:13.522 18:26:11 -- common/autotest_common.sh@955 -- # kill 83492 00:17:13.522 18:26:11 -- common/autotest_common.sh@960 -- # wait 83492 00:17:13.522 18:26:11 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:17:13.522 18:26:11 -- host/digest.sh@54 -- # local rw bs qd 00:17:13.522 18:26:11 -- host/digest.sh@56 -- # rw=randwrite 00:17:13.522 18:26:11 -- host/digest.sh@56 -- # bs=4096 00:17:13.522 18:26:11 -- host/digest.sh@56 -- # qd=128 00:17:13.522 18:26:11 -- host/digest.sh@58 -- # bperfpid=83552 00:17:13.522 18:26:11 -- host/digest.sh@60 -- # waitforlisten 83552 /var/tmp/bperf.sock 00:17:13.522 18:26:11 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:13.522 18:26:11 -- common/autotest_common.sh@829 -- # '[' -z 83552 ']' 00:17:13.522 18:26:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:13.522 18:26:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.522 18:26:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:13.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:13.522 18:26:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.522 18:26:11 -- common/autotest_common.sh@10 -- # set +x 00:17:13.522 [2024-11-17 18:26:11.734566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:13.522 [2024-11-17 18:26:11.734866] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83552 ] 00:17:13.780 [2024-11-17 18:26:11.866774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.780 [2024-11-17 18:26:11.899879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.718 18:26:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.718 18:26:12 -- common/autotest_common.sh@862 -- # return 0 00:17:14.718 18:26:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:14.718 18:26:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:14.718 18:26:12 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:14.718 18:26:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.718 18:26:12 -- common/autotest_common.sh@10 -- # set +x 00:17:14.718 18:26:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.718 18:26:12 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.718 18:26:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.977 nvme0n1 00:17:15.236 18:26:13 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:15.236 18:26:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.236 18:26:13 -- common/autotest_common.sh@10 -- # set +x 00:17:15.236 18:26:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.236 18:26:13 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:15.236 18:26:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:15.236 Running I/O for 2 seconds... 
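[editor's note] The trace above wires up the write-path data-digest test over the bperf RPC socket before the two-second run starts: bdevperf is launched with -z (wait for RPC), NVMe error statistics are enabled, CRC32C error injection is armed, the NVMe-oF/TCP controller is attached with --ddgst, and perform_tests is issued. A condensed, hand-driven sketch of that same sequence is shown below. Paths, addresses, and flags are taken verbatim from the trace; the target-side socket used for accel_error_inject_error is not printed by rpc_cmd and is assumed here to be the default /var/tmp/spdk.sock, and the meaning of -i 256 is not asserted beyond what the trace shows.

  # Start bdevperf with its own RPC socket; -z makes it wait for perform_tests
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # Enable per-error-code NVMe statistics and unlimited bdev retries on the initiator
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # On the target side (socket assumed), make sure no CRC32C injection is active yet
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      accel_error_inject_error -o crc32c -t disable

  # Attach the NVMe-oF/TCP controller with data digest enabled (--ddgst)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm CRC32C corruption so data digest checks fail and complete as
  # COMMAND TRANSIENT TRANSPORT ERROR (flags as used in the trace above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      accel_error_inject_error -o crc32c -t corrupt -i 256

  # Kick off the timed run, then count the transient transport errors afterwards
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The final jq query is the same one get_transient_errcount runs in the earlier trace; the harness then asserts the returned count is greater than zero (the "(( 472 > 0 ))" check above) to pass the test.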
00:17:15.236 [2024-11-17 18:26:13.381497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ddc00 00:17:15.236 [2024-11-17 18:26:13.383132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.236 [2024-11-17 18:26:13.383175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.236 [2024-11-17 18:26:13.396765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fef90 00:17:15.236 [2024-11-17 18:26:13.398176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.236 [2024-11-17 18:26:13.398208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.236 [2024-11-17 18:26:13.411604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ff3c8 00:17:15.236 [2024-11-17 18:26:13.412862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.236 [2024-11-17 18:26:13.412898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:15.236 [2024-11-17 18:26:13.426071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190feb58 00:17:15.236 [2024-11-17 18:26:13.427450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.236 [2024-11-17 18:26:13.427486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:15.236 [2024-11-17 18:26:13.440874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fe720 00:17:15.236 [2024-11-17 18:26:13.442215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.236 [2024-11-17 18:26:13.442251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:15.236 [2024-11-17 18:26:13.455879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fe2e8 00:17:15.236 [2024-11-17 18:26:13.457254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.236 [2024-11-17 18:26:13.457333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:15.236 [2024-11-17 18:26:13.472095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fdeb0 00:17:15.236 [2024-11-17 18:26:13.473486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.236 [2024-11-17 18:26:13.473707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:17:15.236 [2024-11-17 18:26:13.487945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fda78 00:17:15.236 [2024-11-17 18:26:13.489239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.236 [2024-11-17 18:26:13.489466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:15.496 [2024-11-17 18:26:13.503572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fd640 00:17:15.496 [2024-11-17 18:26:13.505282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.496 [2024-11-17 18:26:13.505373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:15.496 [2024-11-17 18:26:13.518979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fd208 00:17:15.496 [2024-11-17 18:26:13.520298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.496 [2024-11-17 18:26:13.520360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:15.496 [2024-11-17 18:26:13.533406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fcdd0 00:17:15.496 [2024-11-17 18:26:13.534921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.496 [2024-11-17 18:26:13.534958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:15.496 [2024-11-17 18:26:13.548593] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fc998 00:17:15.496 [2024-11-17 18:26:13.549813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.496 [2024-11-17 18:26:13.549847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:15.496 [2024-11-17 18:26:13.563010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fc560 00:17:15.496 [2024-11-17 18:26:13.564307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.496 [2024-11-17 18:26:13.564364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:15.496 [2024-11-17 18:26:13.577400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fc128 00:17:15.496 [2024-11-17 18:26:13.578940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.496 [2024-11-17 18:26:13.578994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:15.496 [2024-11-17 18:26:13.594371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fbcf0 00:17:15.496 [2024-11-17 18:26:13.595701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.496 [2024-11-17 18:26:13.595735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:15.496 [2024-11-17 18:26:13.608761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fb8b8 00:17:15.496 [2024-11-17 18:26:13.609915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.496 [2024-11-17 18:26:13.609949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:15.496 [2024-11-17 18:26:13.622903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fb480 00:17:15.496 [2024-11-17 18:26:13.624010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.496 [2024-11-17 18:26:13.624046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:15.497 [2024-11-17 18:26:13.637764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fb048 00:17:15.497 [2024-11-17 18:26:13.639084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.497 [2024-11-17 18:26:13.639123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:15.497 [2024-11-17 18:26:13.653867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fac10 00:17:15.497 [2024-11-17 18:26:13.655160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.497 [2024-11-17 18:26:13.655198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:15.497 [2024-11-17 18:26:13.669616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fa7d8 00:17:15.497 [2024-11-17 18:26:13.670747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.497 [2024-11-17 18:26:13.670948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:15.497 [2024-11-17 18:26:13.684795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190fa3a0 00:17:15.497 [2024-11-17 18:26:13.686042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.497 [2024-11-17 18:26:13.686229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:15.497 [2024-11-17 18:26:13.699854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f9f68 00:17:15.497 [2024-11-17 18:26:13.701167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.497 [2024-11-17 18:26:13.701386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:15.497 [2024-11-17 18:26:13.715033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f9b30 00:17:15.497 [2024-11-17 18:26:13.716426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.497 [2024-11-17 18:26:13.716602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:15.497 [2024-11-17 18:26:13.730081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f96f8 00:17:15.497 [2024-11-17 18:26:13.731423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.497 [2024-11-17 18:26:13.731611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:15.497 [2024-11-17 18:26:13.745135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f92c0 00:17:15.497 [2024-11-17 18:26:13.746419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.497 [2024-11-17 18:26:13.746622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:15.497 [2024-11-17 18:26:13.760707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f8e88 00:17:15.756 [2024-11-17 18:26:13.762130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.762385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:15.756 [2024-11-17 18:26:13.776514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f8a50 00:17:15.756 [2024-11-17 18:26:13.777870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.778079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:15.756 [2024-11-17 18:26:13.791525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f8618 00:17:15.756 [2024-11-17 18:26:13.792673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.792833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:15.756 [2024-11-17 18:26:13.805840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f81e0 00:17:15.756 [2024-11-17 18:26:13.807085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.807116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:15.756 [2024-11-17 18:26:13.820237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f7da8 00:17:15.756 [2024-11-17 18:26:13.821387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.821428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:15.756 [2024-11-17 18:26:13.834627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f7970 00:17:15.756 [2024-11-17 18:26:13.835851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.835888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:15.756 [2024-11-17 18:26:13.851130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f7538 00:17:15.756 [2024-11-17 18:26:13.852221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.852256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:15.756 [2024-11-17 18:26:13.865569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f7100 00:17:15.756 [2024-11-17 18:26:13.866549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.866710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.756 [2024-11-17 18:26:13.880158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f6cc8 00:17:15.756 [2024-11-17 18:26:13.881218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.881255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:15.756 [2024-11-17 18:26:13.894309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f6890 00:17:15.756 [2024-11-17 18:26:13.895324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.895530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:15.756 [2024-11-17 18:26:13.908525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f6458 00:17:15.756 [2024-11-17 18:26:13.909617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.909800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:15.756 [2024-11-17 18:26:13.922806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f6020 00:17:15.756 [2024-11-17 18:26:13.923989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.756 [2024-11-17 18:26:13.924174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:15.757 [2024-11-17 18:26:13.937538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f5be8 00:17:15.757 [2024-11-17 18:26:13.938629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.757 [2024-11-17 18:26:13.938804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:15.757 [2024-11-17 18:26:13.951784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f57b0 00:17:15.757 [2024-11-17 18:26:13.952845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.757 [2024-11-17 18:26:13.953029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:15.757 [2024-11-17 18:26:13.965965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f5378 00:17:15.757 [2024-11-17 18:26:13.967115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.757 [2024-11-17 18:26:13.967332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:15.757 [2024-11-17 18:26:13.980326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f4f40 00:17:15.757 [2024-11-17 18:26:13.981406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.757 [2024-11-17 18:26:13.981590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:15.757 [2024-11-17 18:26:13.994653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f4b08 00:17:15.757 [2024-11-17 18:26:13.995720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.757 [2024-11-17 
18:26:13.995902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:15.757 [2024-11-17 18:26:14.008881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f46d0 00:17:15.757 [2024-11-17 18:26:14.009904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.757 [2024-11-17 18:26:14.010089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:16.016 [2024-11-17 18:26:14.024895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f4298 00:17:16.016 [2024-11-17 18:26:14.026169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.016 [2024-11-17 18:26:14.026406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:16.016 [2024-11-17 18:26:14.040612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f3e60 00:17:16.016 [2024-11-17 18:26:14.041829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.016 [2024-11-17 18:26:14.041869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:16.016 [2024-11-17 18:26:14.055993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f3a28 00:17:16.016 [2024-11-17 18:26:14.056896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.016 [2024-11-17 18:26:14.056932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:16.016 [2024-11-17 18:26:14.070846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f35f0 00:17:16.016 [2024-11-17 18:26:14.071946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.016 [2024-11-17 18:26:14.071977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:16.016 [2024-11-17 18:26:14.085355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f31b8 00:17:16.016 [2024-11-17 18:26:14.086321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.016 [2024-11-17 18:26:14.086394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:16.016 [2024-11-17 18:26:14.099952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f2d80 00:17:16.016 [2024-11-17 18:26:14.100885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:16.016 [2024-11-17 18:26:14.100922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.115579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f2948 00:17:16.017 [2024-11-17 18:26:14.116454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.116489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.129789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f2510 00:17:16.017 [2024-11-17 18:26:14.130627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.130680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.144012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f20d8 00:17:16.017 [2024-11-17 18:26:14.144857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.144893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.158430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f1ca0 00:17:16.017 [2024-11-17 18:26:14.159266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.159327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.172824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f1868 00:17:16.017 [2024-11-17 18:26:14.173613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.173651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.187013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f1430 00:17:16.017 [2024-11-17 18:26:14.187857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.187907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.201256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f0ff8 00:17:16.017 [2024-11-17 18:26:14.202139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6081 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.202175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.215641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f0bc0 00:17:16.017 [2024-11-17 18:26:14.216435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.216485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.230042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f0788 00:17:16.017 [2024-11-17 18:26:14.230953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.231140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.245141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190f0350 00:17:16.017 [2024-11-17 18:26:14.246151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.246378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.260404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190eff18 00:17:16.017 [2024-11-17 18:26:14.261279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.261526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:16.017 [2024-11-17 18:26:14.275089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190efae0 00:17:16.017 [2024-11-17 18:26:14.275999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.017 [2024-11-17 18:26:14.276204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:16.276 [2024-11-17 18:26:14.290819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ef6a8 00:17:16.277 [2024-11-17 18:26:14.291682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.291882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.306614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ef270 00:17:16.277 [2024-11-17 18:26:14.307563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 
nsid:1 lba:3791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.307775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.323221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190eee38 00:17:16.277 [2024-11-17 18:26:14.324204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.324420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.338729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190eea00 00:17:16.277 [2024-11-17 18:26:14.339580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.339729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.353060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ee5c8 00:17:16.277 [2024-11-17 18:26:14.353871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.354089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.368645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ee190 00:17:16.277 [2024-11-17 18:26:14.369444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.369643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.383077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190edd58 00:17:16.277 [2024-11-17 18:26:14.383856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.384037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.397653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ed920 00:17:16.277 [2024-11-17 18:26:14.398541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.398725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.411986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ed4e8 00:17:16.277 [2024-11-17 18:26:14.412768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:23 nsid:1 lba:4141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.412966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.426262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ed0b0 00:17:16.277 [2024-11-17 18:26:14.427192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.427371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.441230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ecc78 00:17:16.277 [2024-11-17 18:26:14.442067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.442299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.456651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ec840 00:17:16.277 [2024-11-17 18:26:14.457478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.457710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.472527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ec408 00:17:16.277 [2024-11-17 18:26:14.473375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.473437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.489274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ebfd0 00:17:16.277 [2024-11-17 18:26:14.489911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.489940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.505704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ebb98 00:17:16.277 [2024-11-17 18:26:14.506277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.506339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.520840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190eb760 00:17:16.277 [2024-11-17 18:26:14.521428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.521472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:16.277 [2024-11-17 18:26:14.535539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190eb328 00:17:16.277 [2024-11-17 18:26:14.536079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.277 [2024-11-17 18:26:14.536117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.551646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190eaef0 00:17:16.538 [2024-11-17 18:26:14.552208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.552247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.566311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190eaab8 00:17:16.538 [2024-11-17 18:26:14.566942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.567148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.581136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ea680 00:17:16.538 [2024-11-17 18:26:14.581784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.581822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.595997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190ea248 00:17:16.538 [2024-11-17 18:26:14.596522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.596560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.610578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e9e10 00:17:16.538 [2024-11-17 18:26:14.611233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.611265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.626199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e99d8 00:17:16.538 [2024-11-17 
18:26:14.626784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.626979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.641131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e95a0 00:17:16.538 [2024-11-17 18:26:14.641841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.642051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.655790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e9168 00:17:16.538 [2024-11-17 18:26:14.656447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.656643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.670091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e8d30 00:17:16.538 [2024-11-17 18:26:14.670787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.671021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.684760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e88f8 00:17:16.538 [2024-11-17 18:26:14.685397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.685583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.699089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e84c0 00:17:16.538 [2024-11-17 18:26:14.699862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.700146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.713847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e8088 00:17:16.538 [2024-11-17 18:26:14.714465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.714699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.729091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e7c50 
00:17:16.538 [2024-11-17 18:26:14.729698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.729862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.743836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e7818 00:17:16.538 [2024-11-17 18:26:14.744226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.744252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.757913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e73e0 00:17:16.538 [2024-11-17 18:26:14.758310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.758348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.772029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e6fa8 00:17:16.538 [2024-11-17 18:26:14.772436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.772463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:16.538 [2024-11-17 18:26:14.786752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e6b70 00:17:16.538 [2024-11-17 18:26:14.787391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.538 [2024-11-17 18:26:14.787427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.804043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e6738 00:17:16.799 [2024-11-17 18:26:14.804545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.804599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.819948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e6300 00:17:16.799 [2024-11-17 18:26:14.820319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.820346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.835035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) 
with pdu=0x2000190e5ec8 00:17:16.799 [2024-11-17 18:26:14.835441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.835469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.849769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e5a90 00:17:16.799 [2024-11-17 18:26:14.850127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.850154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.864598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e5658 00:17:16.799 [2024-11-17 18:26:14.864930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.864956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.879572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e5220 00:17:16.799 [2024-11-17 18:26:14.879932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.879976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.895787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e4de8 00:17:16.799 [2024-11-17 18:26:14.896108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.896136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.911066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e49b0 00:17:16.799 [2024-11-17 18:26:14.911476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.911509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.926522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e4578 00:17:16.799 [2024-11-17 18:26:14.927039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.927072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.942486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x158b160) with pdu=0x2000190e4140 00:17:16.799 [2024-11-17 18:26:14.942839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.942866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.956873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e3d08 00:17:16.799 [2024-11-17 18:26:14.957166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.957192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.971029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e38d0 00:17:16.799 [2024-11-17 18:26:14.971290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.971312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.984998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e3498 00:17:16.799 [2024-11-17 18:26:14.985253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.985287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:14.999271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e3060 00:17:16.799 [2024-11-17 18:26:14.999588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:14.999629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:15.013524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e2c28 00:17:16.799 [2024-11-17 18:26:15.013923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:15.013956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:15.027907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e27f0 00:17:16.799 [2024-11-17 18:26:15.028135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:15.028156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:15.042049] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e23b8 00:17:16.799 [2024-11-17 18:26:15.042283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:15.042303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:16.799 [2024-11-17 18:26:15.056201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e1f80 00:17:16.799 [2024-11-17 18:26:15.056463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:16.799 [2024-11-17 18:26:15.056486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.072124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e1b48 00:17:17.060 [2024-11-17 18:26:15.072354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.072376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.086268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e1710 00:17:17.060 [2024-11-17 18:26:15.086533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.086556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.101063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e12d8 00:17:17.060 [2024-11-17 18:26:15.101247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.101269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.115622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e0ea0 00:17:17.060 [2024-11-17 18:26:15.115809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.115830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.129760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e0a68 00:17:17.060 [2024-11-17 18:26:15.129928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.129949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.144855] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e0630 00:17:17.060 [2024-11-17 18:26:15.145062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.145085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.159545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190e01f8 00:17:17.060 [2024-11-17 18:26:15.159690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.159710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.173628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190dfdc0 00:17:17.060 [2024-11-17 18:26:15.173767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.173789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.188628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190df988 00:17:17.060 [2024-11-17 18:26:15.188771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.188792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.203000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190df550 00:17:17.060 [2024-11-17 18:26:15.203272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.203294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.217487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190df118 00:17:17.060 [2024-11-17 18:26:15.217721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.217744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.231911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190dece0 00:17:17.060 [2024-11-17 18:26:15.232015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.232036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:17.060 
[2024-11-17 18:26:15.246027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190de8a8 00:17:17.060 [2024-11-17 18:26:15.246119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.246140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.260226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190de038 00:17:17.060 [2024-11-17 18:26:15.260342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.060 [2024-11-17 18:26:15.260364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:17.060 [2024-11-17 18:26:15.280035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190de038 00:17:17.061 [2024-11-17 18:26:15.281272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.061 [2024-11-17 18:26:15.281347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.061 [2024-11-17 18:26:15.294449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190de470 00:17:17.061 [2024-11-17 18:26:15.295915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.061 [2024-11-17 18:26:15.295946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.061 [2024-11-17 18:26:15.308842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190de8a8 00:17:17.061 [2024-11-17 18:26:15.310072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.061 [2024-11-17 18:26:15.310107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:17.320 [2024-11-17 18:26:15.324898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190dece0 00:17:17.320 [2024-11-17 18:26:15.326578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.320 [2024-11-17 18:26:15.326612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:17.320 [2024-11-17 18:26:15.341827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190df118 00:17:17.320 [2024-11-17 18:26:15.343299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.320 [2024-11-17 18:26:15.343390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 
dnr:0 00:17:17.320 [2024-11-17 18:26:15.356797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158b160) with pdu=0x2000190df550 00:17:17.320 [2024-11-17 18:26:15.358117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.320 [2024-11-17 18:26:15.358152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:17.320 00:17:17.320 Latency(us) 00:17:17.320 [2024-11-17T18:26:15.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.320 [2024-11-17T18:26:15.587Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.320 nvme0n1 : 2.01 17028.30 66.52 0.00 0.00 7511.16 6136.55 21686.46 00:17:17.320 [2024-11-17T18:26:15.587Z] =================================================================================================================== 00:17:17.320 [2024-11-17T18:26:15.587Z] Total : 17028.30 66.52 0.00 0.00 7511.16 6136.55 21686.46 00:17:17.320 0 00:17:17.320 18:26:15 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:17.320 18:26:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:17.320 18:26:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:17.320 18:26:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:17.320 | .driver_specific 00:17:17.320 | .nvme_error 00:17:17.320 | .status_code 00:17:17.320 | .command_transient_transport_error' 00:17:17.580 18:26:15 -- host/digest.sh@71 -- # (( 133 > 0 )) 00:17:17.580 18:26:15 -- host/digest.sh@73 -- # killprocess 83552 00:17:17.580 18:26:15 -- common/autotest_common.sh@936 -- # '[' -z 83552 ']' 00:17:17.580 18:26:15 -- common/autotest_common.sh@940 -- # kill -0 83552 00:17:17.580 18:26:15 -- common/autotest_common.sh@941 -- # uname 00:17:17.580 18:26:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.580 18:26:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83552 00:17:17.580 killing process with pid 83552 00:17:17.580 Received shutdown signal, test time was about 2.000000 seconds 00:17:17.580 00:17:17.580 Latency(us) 00:17:17.580 [2024-11-17T18:26:15.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.580 [2024-11-17T18:26:15.847Z] =================================================================================================================== 00:17:17.580 [2024-11-17T18:26:15.847Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:17.580 18:26:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:17.580 18:26:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:17.580 18:26:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83552' 00:17:17.580 18:26:15 -- common/autotest_common.sh@955 -- # kill 83552 00:17:17.580 18:26:15 -- common/autotest_common.sh@960 -- # wait 83552 00:17:17.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
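For reference, the "(( 133 > 0 ))" check above reads the injected-error count back over the bdevperf RPC socket with bdev_get_iostat and the jq filter shown in the trace. A minimal sketch of that query, using the rpc.py path, socket, and filter exactly as printed in this log (variable names here are illustrative only):

# Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions seen by nvme0n1,
# as host/digest.sh does above; requires bdev_nvme_set_options --nvme-error-stat.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')
# The digest test passes only if at least one injected error was observed:
(( errcount > 0 )) && echo "observed $errcount transient transport errors"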
00:17:17.580 18:26:15 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:17:17.580 18:26:15 -- host/digest.sh@54 -- # local rw bs qd 00:17:17.580 18:26:15 -- host/digest.sh@56 -- # rw=randwrite 00:17:17.580 18:26:15 -- host/digest.sh@56 -- # bs=131072 00:17:17.580 18:26:15 -- host/digest.sh@56 -- # qd=16 00:17:17.580 18:26:15 -- host/digest.sh@58 -- # bperfpid=83614 00:17:17.580 18:26:15 -- host/digest.sh@60 -- # waitforlisten 83614 /var/tmp/bperf.sock 00:17:17.580 18:26:15 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:17.580 18:26:15 -- common/autotest_common.sh@829 -- # '[' -z 83614 ']' 00:17:17.580 18:26:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:17.580 18:26:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.580 18:26:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:17.580 18:26:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.580 18:26:15 -- common/autotest_common.sh@10 -- # set +x 00:17:17.839 [2024-11-17 18:26:15.851577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:17.839 [2024-11-17 18:26:15.851845] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83614 ] 00:17:17.839 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:17.839 Zero copy mechanism will not be used. 00:17:17.839 [2024-11-17 18:26:15.984260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.839 [2024-11-17 18:26:16.017277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.776 18:26:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.776 18:26:16 -- common/autotest_common.sh@862 -- # return 0 00:17:18.776 18:26:16 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:18.776 18:26:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:19.033 18:26:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:19.033 18:26:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.033 18:26:17 -- common/autotest_common.sh@10 -- # set +x 00:17:19.033 18:26:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.033 18:26:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:19.033 18:26:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:19.291 nvme0n1 00:17:19.291 18:26:17 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:19.291 18:26:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.291 18:26:17 -- common/autotest_common.sh@10 -- # set +x 00:17:19.291 18:26:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.291 18:26:17 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:19.291 18:26:17 -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:19.291 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:19.291 Zero copy mechanism will not be used. 00:17:19.291 Running I/O for 2 seconds... 00:17:19.291 [2024-11-17 18:26:17.504596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.291 [2024-11-17 18:26:17.504897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.291 [2024-11-17 18:26:17.504927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.291 [2024-11-17 18:26:17.509876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.291 [2024-11-17 18:26:17.510431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.291 [2024-11-17 18:26:17.510758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.291 [2024-11-17 18:26:17.515618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.291 [2024-11-17 18:26:17.515937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.291 [2024-11-17 18:26:17.515981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.291 [2024-11-17 18:26:17.520522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.291 [2024-11-17 18:26:17.520805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.291 [2024-11-17 18:26:17.520834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.291 [2024-11-17 18:26:17.525249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.292 [2024-11-17 18:26:17.525591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.292 [2024-11-17 18:26:17.525625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.292 [2024-11-17 18:26:17.530228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.292 [2024-11-17 18:26:17.530630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.292 [2024-11-17 18:26:17.530674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.292 [2024-11-17 18:26:17.535268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.292 [2024-11-17 18:26:17.535621] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.292 [2024-11-17 18:26:17.535658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.292 [2024-11-17 18:26:17.540716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.292 [2024-11-17 18:26:17.541003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.292 [2024-11-17 18:26:17.541031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.292 [2024-11-17 18:26:17.545932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.292 [2024-11-17 18:26:17.546245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.292 [2024-11-17 18:26:17.546299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.292 [2024-11-17 18:26:17.551133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.292 [2024-11-17 18:26:17.551522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.292 [2024-11-17 18:26:17.551564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.551 [2024-11-17 18:26:17.557070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.551 [2024-11-17 18:26:17.557443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.551 [2024-11-17 18:26:17.557478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.551 [2024-11-17 18:26:17.562811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.551 [2024-11-17 18:26:17.563109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.551 [2024-11-17 18:26:17.563136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.551 [2024-11-17 18:26:17.567928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.551 [2024-11-17 18:26:17.568472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.551 [2024-11-17 18:26:17.568508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.551 [2024-11-17 18:26:17.573347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 
00:17:19.551 [2024-11-17 18:26:17.573618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.551 [2024-11-17 18:26:17.573645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.551 [2024-11-17 18:26:17.578093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.551 [2024-11-17 18:26:17.578443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.551 [2024-11-17 18:26:17.578477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.551 [2024-11-17 18:26:17.583043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.551 [2024-11-17 18:26:17.583348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.551 [2024-11-17 18:26:17.583402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.551 [2024-11-17 18:26:17.587979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.551 [2024-11-17 18:26:17.588485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.551 [2024-11-17 18:26:17.588519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.551 [2024-11-17 18:26:17.593264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.551 [2024-11-17 18:26:17.593597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.551 [2024-11-17 18:26:17.593627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.551 [2024-11-17 18:26:17.598114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.598472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.598532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.603152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.603490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.603518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.608175] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.608678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.608711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.613234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.613528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.613556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.617960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.618243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.618296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.622963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.623246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.623285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.627809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.628109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.628137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.632723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.633007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.633035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.637487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.637768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.637796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.642180] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.642560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.642600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.647715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.648014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.648042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.652577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.652858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.652885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.657395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.657676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.657703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.662089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.662420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.662454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.667041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.667339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.667379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.671877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.672184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.672212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
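The digest errors logged in this run are produced deliberately: before perform_tests the script corrupts the accel-layer crc32c calculation at a fixed interval, so the data digest check fails on every 32nd WRITE (visible above as sqhd advancing in steps of 0x20). A condensed sketch of the setup sequence traced earlier, with sockets, address, and NQN taken from this log; the split between the default RPC socket (rpc_cmd) and /var/tmp/bperf.sock (bperf_rpc) follows the trace:

# bdevperf side: keep NVMe error statistics, retry forever, attach with --ddgst
rpc_bperf="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$rpc_bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# default RPC socket (rpc_cmd in the trace): disable, then corrupt crc32c
# with interval 32 so every 32nd data digest comes out wrong
rpc_tgt="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
$rpc_tgt accel_error_inject_error -o crc32c -t disable
$rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 32
# drive the 2-second randwrite workload through bdevperf's RPC interface
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests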
00:17:19.552 [2024-11-17 18:26:17.676731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.677027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.677053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.681536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.681817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.681844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.686325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.686696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.686738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.691418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.691725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.691752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.696669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.696958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.697003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.701695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.702009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.702038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.707098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.707495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.707526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.712373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.712673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.712699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.717422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.717707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.717734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.722398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.722740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.722769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.727337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.552 [2024-11-17 18:26:17.727683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.552 [2024-11-17 18:26:17.727737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.552 [2024-11-17 18:26:17.732281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.732593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.732621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.737019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.737331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.737353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.742560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.742922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.742948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.747843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.748155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.748183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.753229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.753560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.753591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.758316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.758666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.758705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.763388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.763686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.763714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.768333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.768646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.768674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.773118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.773457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.773491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.778067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.778405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.778434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.783140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.783478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.783511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.788104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.788615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.788649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.793197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.793535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.793567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.798088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.798411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.798439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.803054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.803355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.803394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.807986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.808497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 [2024-11-17 18:26:17.808530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.553 [2024-11-17 18:26:17.813434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.553 [2024-11-17 18:26:17.813720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.553 
[2024-11-17 18:26:17.813748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.813 [2024-11-17 18:26:17.818973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.813 [2024-11-17 18:26:17.819258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.813 [2024-11-17 18:26:17.819296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.813 [2024-11-17 18:26:17.824194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.813 [2024-11-17 18:26:17.824694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.813 [2024-11-17 18:26:17.824728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.813 [2024-11-17 18:26:17.829321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.813 [2024-11-17 18:26:17.829607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.813 [2024-11-17 18:26:17.829634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.813 [2024-11-17 18:26:17.834209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.813 [2024-11-17 18:26:17.834600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.813 [2024-11-17 18:26:17.834635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.813 [2024-11-17 18:26:17.839195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.813 [2024-11-17 18:26:17.839534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.813 [2024-11-17 18:26:17.839563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.813 [2024-11-17 18:26:17.844100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.813 [2024-11-17 18:26:17.844615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.844649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.849167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.849485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.849513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.854017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.854331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.854373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.858934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.859217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.859245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.863798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.864101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.864128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.868716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.869000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.869028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.873575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.873855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.873882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.878472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.878852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.878895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.883541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.883832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.883861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.888371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.888663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.888690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.893720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.894066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.894096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.899210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.899796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.899844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.904866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.905206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.905236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.910370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.910755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.910799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.915719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.916033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.916061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.920942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.921255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.921293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.925778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.926059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.926087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.930602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.930970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.930997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.935601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.935884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.935912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.940458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.940739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.940766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.945263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.945605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.945638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.950132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.950483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.950555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.955395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 
[2024-11-17 18:26:17.955739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.955766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.960821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.961135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.961164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.966229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.966618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.966662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.971612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.971951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.971979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.814 [2024-11-17 18:26:17.976868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.814 [2024-11-17 18:26:17.977155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.814 [2024-11-17 18:26:17.977184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:17.982024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:17.982381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:17.982409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:17.987079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:17.987586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:17.987620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:17.992456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:17.992763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:17.992790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:17.997508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:17.997797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:17.997825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.002937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.003413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.003464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.008143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.008482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.008516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.013110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.013451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.013485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.018382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.018723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.018753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.023425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.023736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.023764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.028342] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.028633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.028661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.033263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.033616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.033673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.038263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.038631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.038661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.043288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.043773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.043823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.048598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.048903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.048931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.053549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.053863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.053891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.058624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.059009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.059037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:17:19.815 [2024-11-17 18:26:18.063727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.064019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.064047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.068657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.068945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.068973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:19.815 [2024-11-17 18:26:18.073659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:19.815 [2024-11-17 18:26:18.074024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.815 [2024-11-17 18:26:18.074054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.075 [2024-11-17 18:26:18.079116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.075 [2024-11-17 18:26:18.079598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.079632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.084565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.084895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.084929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.089483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.089772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.089800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.094330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.094741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.094785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.099541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.099868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.099895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.104596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.104897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.104926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.109514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.109803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.109831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.114764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.115076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.115105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.119922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.120204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.120232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.124966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.125250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.125301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.129779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.130079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.130106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.134901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.135359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.135409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.140361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.140738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.140797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.145714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.146023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.146052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.150819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.151365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.151427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.156187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.156482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.156509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.161480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.161840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.161880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.166941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.167498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.167531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.172452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.172737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.172764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.177547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.177828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.177855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.182574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.182923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.182991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.187541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.187824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.187851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.192260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.192552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.192578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.196985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.197265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.197301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.201727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.202008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 
[2024-11-17 18:26:18.202035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.206500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.206834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.076 [2024-11-17 18:26:18.206862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.076 [2024-11-17 18:26:18.211498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.076 [2024-11-17 18:26:18.211780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.211807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.216195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.216512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.216540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.220961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.221242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.221268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.225796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.226086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.226113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.230626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.231004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.231033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.235565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.235868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.235894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.240349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.240656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.240683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.245387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.245678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.245706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.250574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.250910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.250940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.256064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.256449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.256479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.261933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.262298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.262337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.267363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.267878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.267912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.272944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.273282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.273321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.278256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.278616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.278662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.283171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.283670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.283718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.288158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.288499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.288532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.293318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.293610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.293638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.298069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.298395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.298444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.302994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.303507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.303542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.308206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.308572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.308605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.313111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.313439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.313514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.318216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.318590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.318624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.323143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.323606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.323640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.328155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.328518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.328552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.333166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.333539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.333582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.077 [2024-11-17 18:26:18.338534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.077 [2024-11-17 18:26:18.338882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.077 [2024-11-17 18:26:18.338925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.338 [2024-11-17 18:26:18.343956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.338 
[2024-11-17 18:26:18.344246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.338 [2024-11-17 18:26:18.344284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.338 [2024-11-17 18:26:18.349126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.338 [2024-11-17 18:26:18.349488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.338 [2024-11-17 18:26:18.349534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.354252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.354794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.354829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.359359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.359654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.359682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.364172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.364512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.364546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.369241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.369624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.369666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.374460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.374811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.374867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.379759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.380120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.380150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.385195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.385601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.385642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.391053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.391389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.391420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.396548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.396848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.396895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.401890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.402406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.402440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.407737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.408065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.408096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.413207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.413569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.413596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.418768] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.419145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.419176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.424207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.424650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.424719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.429695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.430090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.430126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.435576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.435957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.435987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.441601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.441974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.442017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.447213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.447629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.447688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.452416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.452753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.452780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
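The repeated pairs above all come from the data-digest path of the NVMe/TCP transport: each received data PDU carries a CRC32C digest over its payload, the target recomputes it on receipt (the tcp.c:2036:data_crc32_calc_done entries), and on mismatch the WRITE is completed with the generic status the log prints as COMMAND TRANSIENT TRANSPORT ERROR (00/22), so the host can retry. A minimal, self-contained sketch of that kind of check is below, assuming only what the log shows; crc32c(), verify_data_digest() and SC_TRANSIENT_TRANSPORT_ERROR are illustrative names for this sketch, not the SPDK implementation.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* CRC32C (Castagnoli), bit-at-a-time, reflected form -- the checksum family
 * used for NVMe/TCP header and data digests. */
static uint32_t crc32c(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= p[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Generic status code 0x22 is what the log prints as
 * "COMMAND TRANSIENT TRANSPORT ERROR (00/22)"; the name is ours. */
#define SC_TRANSIENT_TRANSPORT_ERROR 0x22

/* Illustrative digest check: recompute CRC32C over the received PDU payload
 * and compare it with the digest carried at the end of the PDU. */
static int verify_data_digest(const uint8_t *payload, size_t len,
                              uint32_t recv_ddgst)
{
    if (crc32c(payload, len) != recv_ddgst) {
        fprintf(stderr,
                "Data digest error: completing command with status 00/%02x\n",
                SC_TRANSIENT_TRANSPORT_ERROR);
        return SC_TRANSIENT_TRANSPORT_ERROR;
    }
    return 0; /* successful completion */
}

int main(void)
{
    uint8_t payload[32];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t good = crc32c(payload, sizeof(payload));
    verify_data_digest(payload, sizeof(payload), good);      /* passes */
    verify_data_digest(payload, sizeof(payload), good ^ 1u); /* flagged, as in the log */
    return 0;
}

In the test run above the mismatches are injected deliberately, which is why every command in the stream fails the check and is completed with the same retryable 00/22 status rather than a data error.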
00:17:20.339 [2024-11-17 18:26:18.457716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.458056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.458085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.463307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.463686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.463714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.468557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.468898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.468929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.473549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.473933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.474001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.478562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.478916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.478970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.483383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.483710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.483747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.488174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.339 [2024-11-17 18:26:18.488507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.339 [2024-11-17 18:26:18.488544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.339 [2024-11-17 18:26:18.493014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.493392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.493430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.497816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.498137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.498174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.502500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.502856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.502894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.507297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.507685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.507753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.512279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.512604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.512636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.517734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.518120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.518159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.523105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.523436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.523466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.527963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.528283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.528319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.532791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.533113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.533145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.537548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.537868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.537904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.542360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.542724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.542760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.547268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.547605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.547635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.551982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.552304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.552345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.556897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.557254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.557298] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.562185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.562588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.562622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.567531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.567902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.567940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.572938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.573307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.573357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.578541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.578879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.578916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.583952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.584271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.584351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.589279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.589720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.589754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.594681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.595071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:20.340 [2024-11-17 18:26:18.595107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.340 [2024-11-17 18:26:18.600129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.340 [2024-11-17 18:26:18.600528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.340 [2024-11-17 18:26:18.600565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.605786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.606104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.606139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.611202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.611538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.611574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.616232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.616649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.616687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.621455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.621760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.621795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.626435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.626817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.626870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.631410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.631754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.631787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.636953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.637337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.637409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.642147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.642525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.642561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.647629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.647948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.648001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.652861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.653242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.653288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.657927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.658270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.658313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.663034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.663380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.663461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.668092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.668437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.668467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.672885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.673229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.673266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.677700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.678025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.678060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.682487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.682905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.682953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.687309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.687644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.687679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.692088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.692443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.692479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.696951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.697300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.697342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.701785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.601 [2024-11-17 18:26:18.702104] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.601 [2024-11-17 18:26:18.702137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.601 [2024-11-17 18:26:18.706647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.706997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.707033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.711606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.711929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.711960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.716507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.716826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.716858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.721212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.721571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.721607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.726069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.726388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.726422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.730864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.731197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.731232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.735705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 
00:17:20.602 [2024-11-17 18:26:18.736025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.736055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.740469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.740790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.740823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.745218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.745604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.745641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.750050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.750385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.750417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.754963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.755274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.755316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.759761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.760081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.760117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.764558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.764879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.764909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.769372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.769696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.769727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.774463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.774815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.774854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.779860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.780244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.780292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.784796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.785123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.785160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.789681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.790000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.790036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.794442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.794827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.794908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.799485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.799807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.799838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.804128] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.804462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.804494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.809010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.809362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.809391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.813800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.814123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.814154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.818652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.818980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.819023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.823573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.823923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.823960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.828579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.828897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.828929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.602 [2024-11-17 18:26:18.833482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.602 [2024-11-17 18:26:18.833790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.602 [2024-11-17 18:26:18.833825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:17:20.603 [2024-11-17 18:26:18.838351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.603 [2024-11-17 18:26:18.838732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.603 [2024-11-17 18:26:18.838771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.603 [2024-11-17 18:26:18.844030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.603 [2024-11-17 18:26:18.844392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.603 [2024-11-17 18:26:18.844428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.603 [2024-11-17 18:26:18.849555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.603 [2024-11-17 18:26:18.849888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.603 [2024-11-17 18:26:18.849938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.603 [2024-11-17 18:26:18.854952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.603 [2024-11-17 18:26:18.855341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.603 [2024-11-17 18:26:18.855425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.603 [2024-11-17 18:26:18.860508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.603 [2024-11-17 18:26:18.860886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.603 [2024-11-17 18:26:18.860925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.866258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.866692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.866731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.871498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.871869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.871923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.876475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.876805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.876841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.881285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.881653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.881701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.886257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.886662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.886701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.891177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.891530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.891574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.896116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.896466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.896502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.901159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.901528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.901565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.906065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.906416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.906448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.911070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.911408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.911484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.915913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.916286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.916335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.921070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.921453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.921488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.926118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.926486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.926544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.863 [2024-11-17 18:26:18.931038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.863 [2024-11-17 18:26:18.931404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.863 [2024-11-17 18:26:18.931455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.935932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.936274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.936316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.940827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.941176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.941213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.945814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.946182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.946217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.950816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.951155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.951192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.955781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.956108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.956145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.960653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.961011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.961051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.965584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.965912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.965949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.970437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.970810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.970863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.975320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.975678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 
[2024-11-17 18:26:18.975715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.980204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.980569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.980606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.985041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.985426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.985476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.990069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.990432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.990467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:18.995134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:18.995523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:18.995560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:19.000056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:19.000431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:19.000469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:19.005063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:19.005427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:19.005463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:19.010051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:19.010406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:19.010439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:19.015168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:19.015511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:19.015542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:19.020149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:19.020498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:19.020535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:19.025035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:19.025401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:19.025435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:19.030054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:19.030404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:19.030438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:19.035635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:19.036032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:19.036070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:19.040997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:19.041353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:19.041419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:19.045992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.864 [2024-11-17 18:26:19.046361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.864 [2024-11-17 18:26:19.046419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.864 [2024-11-17 18:26:19.050952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.051295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.051346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.055872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.056204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.056239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.060912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.061280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.061327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.065854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.066228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.066266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.070969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.071309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.071352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.075822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.076166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.076203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.080913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.081284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.081364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.086459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.086806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.086861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.092022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.092333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.092430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.097489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.097866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.097931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.103230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.103595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.103633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.108745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.109120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.109160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.114321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.114725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.114766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.120144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 
[2024-11-17 18:26:19.120476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.120515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.865 [2024-11-17 18:26:19.125659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:20.865 [2024-11-17 18:26:19.126045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.865 [2024-11-17 18:26:19.126086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.135 [2024-11-17 18:26:19.131258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.135 [2024-11-17 18:26:19.131616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.135 [2024-11-17 18:26:19.131655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.135 [2024-11-17 18:26:19.136851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.135 [2024-11-17 18:26:19.137217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.135 [2024-11-17 18:26:19.137257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.135 [2024-11-17 18:26:19.142540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.135 [2024-11-17 18:26:19.142862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.135 [2024-11-17 18:26:19.142900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.135 [2024-11-17 18:26:19.147899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.135 [2024-11-17 18:26:19.148256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.135 [2024-11-17 18:26:19.148305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.135 [2024-11-17 18:26:19.153512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.135 [2024-11-17 18:26:19.153872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.135 [2024-11-17 18:26:19.153910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.159107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.159475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.159512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.164478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.164814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.164852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.169657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.170017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.170056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.174594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.174969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.175007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.179525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.179871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.179909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.184869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.185205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.185245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.189887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.190223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.190262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.194952] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.195294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.195337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.199934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.200302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.200360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.205044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.205413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.205460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.210309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.210674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.210713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.215316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.215719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.215773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.220484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.220821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.220859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.225565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.225919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.225957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
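Note: the repeating pairs in this stretch of the log are the digest-error test doing its job. On each cycle the TCP transport reports a CRC-32C data digest mismatch on a received PDU (data_crc32_calc_done), and the qpair code prints the offending WRITE command together with its TRANSIENT TRANSPORT ERROR (00/22) completion. The harness counts these completions afterwards through the bdevperf RPC socket; a minimal form of that query, using the socket path, bdev name and jq filter that appear verbatim further down in this log (the jq filter is collapsed onto one line here), is:

# Count transient transport errors recorded for nvme0n1 over the bdevperf RPC socket
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'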
00:17:21.136 [2024-11-17 18:26:19.230551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.230887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.230941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.235539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.235873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.235908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.240565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.240909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.240947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.245635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.245983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.246022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.250872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.251231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.251270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.255816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.256182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.256222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.260902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.261248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.261297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.265997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.266349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.266384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.271090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.271448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.271489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.276182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.276505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.276542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.281272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.281695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.281733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.286454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.286856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.286896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.291689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.292066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.292106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.297244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.297612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.297659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.302250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.302620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.302660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.307233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.307595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.307633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.312297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.312641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.312679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.317446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.317792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.317830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.322367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.322746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.322781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.327428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.327765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.327804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.332418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.332754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.332792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.337430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.337765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.337804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.342428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.342818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.342857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.347352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.347708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.347746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.352382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.352739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.352777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.357424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.357769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.357808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.362436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.362809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.362848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.367475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.367831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 
[2024-11-17 18:26:19.367870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.372510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.372856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.372894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.377414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.136 [2024-11-17 18:26:19.377758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.136 [2024-11-17 18:26:19.377796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.136 [2024-11-17 18:26:19.382366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.137 [2024-11-17 18:26:19.382729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.137 [2024-11-17 18:26:19.382768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.137 [2024-11-17 18:26:19.387405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.137 [2024-11-17 18:26:19.387788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.137 [2024-11-17 18:26:19.387828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.137 [2024-11-17 18:26:19.392944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.137 [2024-11-17 18:26:19.393286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.137 [2024-11-17 18:26:19.393330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.398515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.398826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.398865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.403977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.404285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.404354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.409434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.409746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.409785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.414974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.415297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.415341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.420435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.420793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.420832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.425866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.426228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.426267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.431446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.431812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.431851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.436825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.437181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.437217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.442392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.442755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.442794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.447387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.447750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.447789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.452563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.452918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.452956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.457559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.457914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.457952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.462816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.463192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.463231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.467824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.468188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.468226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.473146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.473520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.473559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.478269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.478670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.478711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.483526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.483854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.483893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.488562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.488939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.488977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.396 [2024-11-17 18:26:19.493530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e30) with pdu=0x2000190fef90 00:17:21.396 [2024-11-17 18:26:19.493664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.396 [2024-11-17 18:26:19.493686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.396 00:17:21.396 Latency(us) 00:17:21.396 [2024-11-17T18:26:19.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.396 [2024-11-17T18:26:19.663Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:21.396 nvme0n1 : 2.00 6037.37 754.67 0.00 0.00 2644.37 2100.13 11081.54 00:17:21.396 [2024-11-17T18:26:19.663Z] =================================================================================================================== 00:17:21.396 [2024-11-17T18:26:19.663Z] Total : 6037.37 754.67 0.00 0.00 2644.37 2100.13 11081.54 00:17:21.396 0 00:17:21.396 18:26:19 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:21.396 18:26:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:21.396 18:26:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:21.396 | .driver_specific 00:17:21.396 | .nvme_error 00:17:21.396 | .status_code 00:17:21.396 | .command_transient_transport_error' 00:17:21.396 18:26:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:21.655 18:26:19 -- host/digest.sh@71 -- # (( 390 > 0 )) 00:17:21.655 18:26:19 -- host/digest.sh@73 -- # killprocess 83614 00:17:21.655 18:26:19 -- common/autotest_common.sh@936 -- # '[' -z 83614 ']' 00:17:21.655 18:26:19 -- common/autotest_common.sh@940 -- # kill -0 83614 00:17:21.655 18:26:19 -- common/autotest_common.sh@941 -- # uname 00:17:21.655 18:26:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.655 18:26:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83614 00:17:21.655 18:26:19 -- common/autotest_common.sh@942 -- # 
process_name=reactor_1 00:17:21.655 18:26:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:21.655 killing process with pid 83614 00:17:21.655 18:26:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83614' 00:17:21.655 Received shutdown signal, test time was about 2.000000 seconds 00:17:21.655 00:17:21.655 Latency(us) 00:17:21.655 [2024-11-17T18:26:19.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.655 [2024-11-17T18:26:19.922Z] =================================================================================================================== 00:17:21.655 [2024-11-17T18:26:19.922Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.655 18:26:19 -- common/autotest_common.sh@955 -- # kill 83614 00:17:21.655 18:26:19 -- common/autotest_common.sh@960 -- # wait 83614 00:17:21.914 18:26:19 -- host/digest.sh@115 -- # killprocess 83420 00:17:21.914 18:26:19 -- common/autotest_common.sh@936 -- # '[' -z 83420 ']' 00:17:21.914 18:26:19 -- common/autotest_common.sh@940 -- # kill -0 83420 00:17:21.914 18:26:19 -- common/autotest_common.sh@941 -- # uname 00:17:21.915 18:26:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.915 18:26:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83420 00:17:21.915 18:26:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:21.915 killing process with pid 83420 00:17:21.915 18:26:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:21.915 18:26:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83420' 00:17:21.915 18:26:19 -- common/autotest_common.sh@955 -- # kill 83420 00:17:21.915 18:26:19 -- common/autotest_common.sh@960 -- # wait 83420 00:17:21.915 00:17:21.915 real 0m16.532s 00:17:21.915 user 0m32.495s 00:17:21.915 sys 0m4.462s 00:17:21.915 18:26:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:21.915 18:26:20 -- common/autotest_common.sh@10 -- # set +x 00:17:21.915 ************************************ 00:17:21.915 END TEST nvmf_digest_error 00:17:21.915 ************************************ 00:17:21.915 18:26:20 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:17:21.915 18:26:20 -- host/digest.sh@139 -- # nvmftestfini 00:17:21.915 18:26:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:21.915 18:26:20 -- nvmf/common.sh@116 -- # sync 00:17:22.174 18:26:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:22.174 18:26:20 -- nvmf/common.sh@119 -- # set +e 00:17:22.174 18:26:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:22.174 18:26:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:22.174 rmmod nvme_tcp 00:17:22.174 rmmod nvme_fabrics 00:17:22.174 rmmod nvme_keyring 00:17:22.174 18:26:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:22.174 18:26:20 -- nvmf/common.sh@123 -- # set -e 00:17:22.174 18:26:20 -- nvmf/common.sh@124 -- # return 0 00:17:22.174 18:26:20 -- nvmf/common.sh@477 -- # '[' -n 83420 ']' 00:17:22.174 18:26:20 -- nvmf/common.sh@478 -- # killprocess 83420 00:17:22.174 18:26:20 -- common/autotest_common.sh@936 -- # '[' -z 83420 ']' 00:17:22.174 18:26:20 -- common/autotest_common.sh@940 -- # kill -0 83420 00:17:22.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83420) - No such process 00:17:22.174 Process with pid 83420 is not found 00:17:22.174 18:26:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83420 is not found' 00:17:22.174 18:26:20 -- nvmf/common.sh@480 -- # '[' '' == 
iso ']' 00:17:22.174 18:26:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:22.174 18:26:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:22.174 18:26:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.174 18:26:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:22.174 18:26:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.174 18:26:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.174 18:26:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.174 18:26:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:22.174 00:17:22.174 real 0m32.551s 00:17:22.174 user 1m1.691s 00:17:22.174 sys 0m9.084s 00:17:22.174 18:26:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:22.174 18:26:20 -- common/autotest_common.sh@10 -- # set +x 00:17:22.174 ************************************ 00:17:22.174 END TEST nvmf_digest 00:17:22.174 ************************************ 00:17:22.174 18:26:20 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:17:22.174 18:26:20 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:17:22.174 18:26:20 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:22.174 18:26:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:22.174 18:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:22.174 18:26:20 -- common/autotest_common.sh@10 -- # set +x 00:17:22.174 ************************************ 00:17:22.174 START TEST nvmf_multipath 00:17:22.174 ************************************ 00:17:22.174 18:26:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:22.174 * Looking for test storage... 00:17:22.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:22.435 18:26:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:22.435 18:26:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:22.435 18:26:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:22.435 18:26:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:22.435 18:26:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:22.435 18:26:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:22.435 18:26:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:22.435 18:26:20 -- scripts/common.sh@335 -- # IFS=.-: 00:17:22.435 18:26:20 -- scripts/common.sh@335 -- # read -ra ver1 00:17:22.435 18:26:20 -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.435 18:26:20 -- scripts/common.sh@336 -- # read -ra ver2 00:17:22.435 18:26:20 -- scripts/common.sh@337 -- # local 'op=<' 00:17:22.435 18:26:20 -- scripts/common.sh@339 -- # ver1_l=2 00:17:22.435 18:26:20 -- scripts/common.sh@340 -- # ver2_l=1 00:17:22.435 18:26:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:22.435 18:26:20 -- scripts/common.sh@343 -- # case "$op" in 00:17:22.435 18:26:20 -- scripts/common.sh@344 -- # : 1 00:17:22.435 18:26:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:22.435 18:26:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.435 18:26:20 -- scripts/common.sh@364 -- # decimal 1 00:17:22.435 18:26:20 -- scripts/common.sh@352 -- # local d=1 00:17:22.435 18:26:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.435 18:26:20 -- scripts/common.sh@354 -- # echo 1 00:17:22.435 18:26:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:22.435 18:26:20 -- scripts/common.sh@365 -- # decimal 2 00:17:22.435 18:26:20 -- scripts/common.sh@352 -- # local d=2 00:17:22.435 18:26:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.435 18:26:20 -- scripts/common.sh@354 -- # echo 2 00:17:22.435 18:26:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:22.435 18:26:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:22.435 18:26:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:22.435 18:26:20 -- scripts/common.sh@367 -- # return 0 00:17:22.435 18:26:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.435 18:26:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:22.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.435 --rc genhtml_branch_coverage=1 00:17:22.435 --rc genhtml_function_coverage=1 00:17:22.435 --rc genhtml_legend=1 00:17:22.435 --rc geninfo_all_blocks=1 00:17:22.435 --rc geninfo_unexecuted_blocks=1 00:17:22.435 00:17:22.435 ' 00:17:22.435 18:26:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:22.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.435 --rc genhtml_branch_coverage=1 00:17:22.435 --rc genhtml_function_coverage=1 00:17:22.435 --rc genhtml_legend=1 00:17:22.435 --rc geninfo_all_blocks=1 00:17:22.435 --rc geninfo_unexecuted_blocks=1 00:17:22.435 00:17:22.435 ' 00:17:22.435 18:26:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:22.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.435 --rc genhtml_branch_coverage=1 00:17:22.435 --rc genhtml_function_coverage=1 00:17:22.435 --rc genhtml_legend=1 00:17:22.435 --rc geninfo_all_blocks=1 00:17:22.435 --rc geninfo_unexecuted_blocks=1 00:17:22.435 00:17:22.435 ' 00:17:22.435 18:26:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:22.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.435 --rc genhtml_branch_coverage=1 00:17:22.435 --rc genhtml_function_coverage=1 00:17:22.435 --rc genhtml_legend=1 00:17:22.435 --rc geninfo_all_blocks=1 00:17:22.435 --rc geninfo_unexecuted_blocks=1 00:17:22.435 00:17:22.435 ' 00:17:22.435 18:26:20 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.435 18:26:20 -- nvmf/common.sh@7 -- # uname -s 00:17:22.435 18:26:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.435 18:26:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.435 18:26:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.435 18:26:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.435 18:26:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.435 18:26:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.435 18:26:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.435 18:26:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.435 18:26:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.435 18:26:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.435 18:26:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:17:22.435 
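Note: the lt/cmp_versions trace just above is the stock scripts/common.sh helper deciding whether the installed lcov (1.15 here) predates version 2; it splits both version strings on '.', '-' and ':' and compares them field by field. A self-contained sketch of the same comparison, splitting on dots only for brevity and under a hypothetical name (version_lt is not a helper in the repo), is:

# Succeeds (returns 0) when $1 is strictly older than $2, comparing dotted numeric fields
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message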
18:26:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:17:22.435 18:26:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.435 18:26:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.435 18:26:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:22.435 18:26:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.435 18:26:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.435 18:26:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.435 18:26:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.435 18:26:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.435 18:26:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.435 18:26:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.435 18:26:20 -- paths/export.sh@5 -- # export PATH 00:17:22.435 18:26:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.435 18:26:20 -- nvmf/common.sh@46 -- # : 0 00:17:22.435 18:26:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:22.435 18:26:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:22.435 18:26:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:22.435 18:26:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.435 18:26:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.435 18:26:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
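Note: nvmf/common.sh, sourced above, generates a host NQN with nvme gen-hostnqn and keeps the matching --hostnqn/--hostid pair in NVME_HOST so the initiator always presents the same identity. For reference only (the harness drives the connection itself), a manual NVMe/TCP connect using that identity against the first target address and port from this log, with standard nvme-cli short options and the subsystem NQN defined just below (nqn.2016-06.io.spdk:cnode1), would look roughly like:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870
HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$HOSTNQN" --hostid="$HOSTID"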
00:17:22.435 18:26:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:22.435 18:26:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:22.436 18:26:20 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:22.436 18:26:20 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:22.436 18:26:20 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.436 18:26:20 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:22.436 18:26:20 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:22.436 18:26:20 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:22.436 18:26:20 -- host/multipath.sh@30 -- # nvmftestinit 00:17:22.436 18:26:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:22.436 18:26:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.436 18:26:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:22.436 18:26:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:22.436 18:26:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:22.436 18:26:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.436 18:26:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.436 18:26:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.436 18:26:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:22.436 18:26:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:22.436 18:26:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:22.436 18:26:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:22.436 18:26:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:22.436 18:26:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:22.436 18:26:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.436 18:26:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:22.436 18:26:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:22.436 18:26:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:22.436 18:26:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:22.436 18:26:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:22.436 18:26:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:22.436 18:26:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.436 18:26:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:22.436 18:26:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:22.436 18:26:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:22.436 18:26:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:22.436 18:26:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:22.436 18:26:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:22.436 Cannot find device "nvmf_tgt_br" 00:17:22.436 18:26:20 -- nvmf/common.sh@154 -- # true 00:17:22.436 18:26:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:22.436 Cannot find device "nvmf_tgt_br2" 00:17:22.436 18:26:20 -- nvmf/common.sh@155 -- # true 00:17:22.436 18:26:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:22.436 18:26:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:22.436 Cannot find device "nvmf_tgt_br" 00:17:22.436 18:26:20 -- nvmf/common.sh@157 -- # true 00:17:22.436 18:26:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:22.436 Cannot find device 
"nvmf_tgt_br2" 00:17:22.436 18:26:20 -- nvmf/common.sh@158 -- # true 00:17:22.436 18:26:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:22.436 18:26:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:22.436 18:26:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.436 18:26:20 -- nvmf/common.sh@161 -- # true 00:17:22.436 18:26:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.436 18:26:20 -- nvmf/common.sh@162 -- # true 00:17:22.436 18:26:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:22.436 18:26:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:22.436 18:26:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:22.696 18:26:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:22.696 18:26:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:22.696 18:26:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:22.696 18:26:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:22.696 18:26:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:22.696 18:26:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:22.696 18:26:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:22.696 18:26:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:22.696 18:26:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:22.696 18:26:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:22.696 18:26:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:22.696 18:26:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:22.696 18:26:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:22.696 18:26:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:22.696 18:26:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:22.696 18:26:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:22.696 18:26:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:22.696 18:26:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:22.696 18:26:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:22.696 18:26:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.696 18:26:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:22.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:17:22.696 00:17:22.696 --- 10.0.0.2 ping statistics --- 00:17:22.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.696 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:22.696 18:26:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:22.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:22.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:17:22.696 00:17:22.696 --- 10.0.0.3 ping statistics --- 00:17:22.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.696 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:22.696 18:26:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:22.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:22.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:22.696 00:17:22.696 --- 10.0.0.1 ping statistics --- 00:17:22.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.696 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:22.696 18:26:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.696 18:26:20 -- nvmf/common.sh@421 -- # return 0 00:17:22.696 18:26:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:22.696 18:26:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.696 18:26:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:22.696 18:26:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:22.696 18:26:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.696 18:26:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:22.696 18:26:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:22.696 18:26:20 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:22.696 18:26:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.696 18:26:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.696 18:26:20 -- common/autotest_common.sh@10 -- # set +x 00:17:22.696 18:26:20 -- nvmf/common.sh@469 -- # nvmfpid=83883 00:17:22.696 18:26:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:22.696 18:26:20 -- nvmf/common.sh@470 -- # waitforlisten 83883 00:17:22.696 18:26:20 -- common/autotest_common.sh@829 -- # '[' -z 83883 ']' 00:17:22.696 18:26:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.696 18:26:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.696 18:26:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.696 18:26:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.696 18:26:20 -- common/autotest_common.sh@10 -- # set +x 00:17:22.696 [2024-11-17 18:26:20.942940] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:22.696 [2024-11-17 18:26:20.943038] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.955 [2024-11-17 18:26:21.083899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:22.955 [2024-11-17 18:26:21.116493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.955 [2024-11-17 18:26:21.116619] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.955 [2024-11-17 18:26:21.116631] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
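The nvmf_veth_init sequence traced above boils down to one initiator-side veth pair left on the host and two target-side pairs moved into the nvmf_tgt_ns_spdk namespace, all joined by a bridge. A condensed sketch with the interface names, addresses and port taken from the trace (cleanup and error handling omitted, not the exact helper):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1 (10.0.0.2)
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2 (10.0.0.3)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2; ping -c 1 10.0.0.3                       # host reaches both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace reaches the initiator
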
00:17:22.955 [2024-11-17 18:26:21.116639] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.955 [2024-11-17 18:26:21.116810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.955 [2024-11-17 18:26:21.116819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.955 18:26:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.955 18:26:21 -- common/autotest_common.sh@862 -- # return 0 00:17:22.955 18:26:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:22.955 18:26:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:22.955 18:26:21 -- common/autotest_common.sh@10 -- # set +x 00:17:23.213 18:26:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.213 18:26:21 -- host/multipath.sh@33 -- # nvmfapp_pid=83883 00:17:23.213 18:26:21 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:23.213 [2024-11-17 18:26:21.439408] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.213 18:26:21 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:23.789 Malloc0 00:17:23.789 18:26:21 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:23.789 18:26:22 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:24.375 18:26:22 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.375 [2024-11-17 18:26:22.579492] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.375 18:26:22 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:24.634 [2024-11-17 18:26:22.815642] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:24.634 18:26:22 -- host/multipath.sh@44 -- # bdevperf_pid=83931 00:17:24.634 18:26:22 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:24.634 18:26:22 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:24.634 18:26:22 -- host/multipath.sh@47 -- # waitforlisten 83931 /var/tmp/bdevperf.sock 00:17:24.634 18:26:22 -- common/autotest_common.sh@829 -- # '[' -z 83931 ']' 00:17:24.634 18:26:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.634 18:26:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.634 18:26:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
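Collected from the RPC calls in the trace, the target-side configuration behind this test is roughly the following. The $rpc shorthand is introduced here for brevity; the flags are copied from the trace rather than re-derived:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # nvmf_tgt runs inside the namespace: ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3
  $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0                      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Initiator side: bdevperf on its own RPC socket, then two bdev_nvme_attach_controller calls
  # to 4420 and 4421 (the second with -x multipath), as traced immediately below.
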
00:17:24.634 18:26:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.634 18:26:22 -- common/autotest_common.sh@10 -- # set +x 00:17:24.893 18:26:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.893 18:26:23 -- common/autotest_common.sh@862 -- # return 0 00:17:24.893 18:26:23 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:25.152 18:26:23 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:25.412 Nvme0n1 00:17:25.412 18:26:23 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:25.979 Nvme0n1 00:17:25.979 18:26:23 -- host/multipath.sh@78 -- # sleep 1 00:17:25.979 18:26:23 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:26.914 18:26:24 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:26.914 18:26:24 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:27.173 18:26:25 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:27.432 18:26:25 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:27.432 18:26:25 -- host/multipath.sh@65 -- # dtrace_pid=83963 00:17:27.432 18:26:25 -- host/multipath.sh@66 -- # sleep 6 00:17:27.432 18:26:25 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:33.992 18:26:31 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:33.992 18:26:31 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:33.992 18:26:31 -- host/multipath.sh@67 -- # active_port=4421 00:17:33.992 Attaching 4 probes... 
00:17:33.992 @path[10.0.0.2, 4421]: 19641 00:17:33.992 @path[10.0.0.2, 4421]: 20031 00:17:33.992 @path[10.0.0.2, 4421]: 20123 00:17:33.992 @path[10.0.0.2, 4421]: 19960 00:17:33.992 @path[10.0.0.2, 4421]: 20009 00:17:33.992 18:26:31 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:33.992 18:26:31 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:33.992 18:26:31 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:33.992 18:26:31 -- host/multipath.sh@69 -- # sed -n 1p 00:17:33.992 18:26:31 -- host/multipath.sh@69 -- # port=4421 00:17:33.992 18:26:31 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:33.992 18:26:31 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:33.992 18:26:31 -- host/multipath.sh@72 -- # kill 83963 00:17:33.992 18:26:31 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:33.992 18:26:31 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:33.992 18:26:31 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:33.992 18:26:32 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:34.250 18:26:32 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:34.250 18:26:32 -- host/multipath.sh@65 -- # dtrace_pid=84082 00:17:34.251 18:26:32 -- host/multipath.sh@66 -- # sleep 6 00:17:34.251 18:26:32 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:40.871 18:26:38 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:40.871 18:26:38 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:40.871 18:26:38 -- host/multipath.sh@67 -- # active_port=4420 00:17:40.871 18:26:38 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:40.871 Attaching 4 probes... 
00:17:40.871 @path[10.0.0.2, 4420]: 20261 00:17:40.871 @path[10.0.0.2, 4420]: 20066 00:17:40.871 @path[10.0.0.2, 4420]: 20180 00:17:40.871 @path[10.0.0.2, 4420]: 20094 00:17:40.871 @path[10.0.0.2, 4420]: 20268 00:17:40.871 18:26:38 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:40.871 18:26:38 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:40.871 18:26:38 -- host/multipath.sh@69 -- # sed -n 1p 00:17:40.871 18:26:38 -- host/multipath.sh@69 -- # port=4420 00:17:40.871 18:26:38 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:40.871 18:26:38 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:40.871 18:26:38 -- host/multipath.sh@72 -- # kill 84082 00:17:40.871 18:26:38 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:40.871 18:26:38 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:40.871 18:26:38 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:40.871 18:26:38 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:41.150 18:26:39 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:41.150 18:26:39 -- host/multipath.sh@65 -- # dtrace_pid=84200 00:17:41.150 18:26:39 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:41.150 18:26:39 -- host/multipath.sh@66 -- # sleep 6 00:17:47.713 18:26:45 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:47.713 18:26:45 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:47.713 18:26:45 -- host/multipath.sh@67 -- # active_port=4421 00:17:47.713 18:26:45 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:47.713 Attaching 4 probes... 
00:17:47.713 @path[10.0.0.2, 4421]: 15168 00:17:47.713 @path[10.0.0.2, 4421]: 19735 00:17:47.713 @path[10.0.0.2, 4421]: 19838 00:17:47.713 @path[10.0.0.2, 4421]: 19946 00:17:47.713 @path[10.0.0.2, 4421]: 20169 00:17:47.713 18:26:45 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:47.713 18:26:45 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:47.713 18:26:45 -- host/multipath.sh@69 -- # sed -n 1p 00:17:47.713 18:26:45 -- host/multipath.sh@69 -- # port=4421 00:17:47.713 18:26:45 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:47.713 18:26:45 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:47.713 18:26:45 -- host/multipath.sh@72 -- # kill 84200 00:17:47.713 18:26:45 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:47.713 18:26:45 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:47.713 18:26:45 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:47.713 18:26:45 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:47.972 18:26:46 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:47.972 18:26:46 -- host/multipath.sh@65 -- # dtrace_pid=84312 00:17:47.972 18:26:46 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:47.972 18:26:46 -- host/multipath.sh@66 -- # sleep 6 00:17:54.537 18:26:52 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:54.537 18:26:52 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:54.537 18:26:52 -- host/multipath.sh@67 -- # active_port= 00:17:54.537 18:26:52 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:54.537 Attaching 4 probes... 
00:17:54.537 00:17:54.537 00:17:54.537 00:17:54.537 00:17:54.537 00:17:54.537 18:26:52 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:54.537 18:26:52 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:54.537 18:26:52 -- host/multipath.sh@69 -- # sed -n 1p 00:17:54.537 18:26:52 -- host/multipath.sh@69 -- # port= 00:17:54.537 18:26:52 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:54.537 18:26:52 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:54.537 18:26:52 -- host/multipath.sh@72 -- # kill 84312 00:17:54.537 18:26:52 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:54.537 18:26:52 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:54.537 18:26:52 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:54.537 18:26:52 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:54.796 18:26:52 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:54.796 18:26:52 -- host/multipath.sh@65 -- # dtrace_pid=84430 00:17:54.796 18:26:52 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:54.796 18:26:52 -- host/multipath.sh@66 -- # sleep 6 00:18:01.427 18:26:58 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:01.427 18:26:58 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:01.427 18:26:59 -- host/multipath.sh@67 -- # active_port=4421 00:18:01.427 18:26:59 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:01.427 Attaching 4 probes... 
00:18:01.427 @path[10.0.0.2, 4421]: 19143 00:18:01.427 @path[10.0.0.2, 4421]: 19743 00:18:01.427 @path[10.0.0.2, 4421]: 19425 00:18:01.427 @path[10.0.0.2, 4421]: 19470 00:18:01.427 @path[10.0.0.2, 4421]: 19649 00:18:01.427 18:26:59 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:01.427 18:26:59 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:01.427 18:26:59 -- host/multipath.sh@69 -- # sed -n 1p 00:18:01.427 18:26:59 -- host/multipath.sh@69 -- # port=4421 00:18:01.427 18:26:59 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:01.427 18:26:59 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:01.427 18:26:59 -- host/multipath.sh@72 -- # kill 84430 00:18:01.427 18:26:59 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:01.427 18:26:59 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:01.427 [2024-11-17 18:26:59.417808] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417888] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417917] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417933] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417957] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.417996] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.418004] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.418012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.418019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.418027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.418035] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.418043] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.418051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.418059] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.418067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.427 [2024-11-17 18:26:59.418074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418082] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418114] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418122] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418150] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418201] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418233] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418241] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418259] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418284] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418292] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418308] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418316] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418355] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418364] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418372] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418380] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418388] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 [2024-11-17 18:26:59.418413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e7a0 is same with the state(5) to be set 00:18:01.428 18:26:59 -- host/multipath.sh@101 -- # sleep 1 00:18:02.365 18:27:00 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 
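Every confirm_io_on_port cycle in this run follows the same shape: flip the ANA state on the two listeners, sample which path bdevperf actually sends I/O down, and compare that port against the listener currently reporting the expected state. A condensed sketch assembled from the traced commands, using the first cycle's arguments (non_optimized on 4420, optimized on 4421); the backgrounding and redirection glue and the $rpc/$nqn shorthands are assumptions, not copied from the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized
  # Sample the I/O path with the nvmf_path.bt bpftrace script against the target pid ($nvmfapp_pid in the trace).
  scripts/bpftrace.sh "$nvmfapp_pid" scripts/bpf/nvmf_path.bt > trace.txt &
  dtrace_pid=$!
  sleep 6
  # Listener that currently reports the expected ANA state...
  active_port=$($rpc nvmf_subsystem_get_listeners $nqn \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  # ...versus the port the probes actually saw traffic on.
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
  [[ "$port" == "$active_port" ]]
  kill "$dtrace_pid"; rm -f trace.txt
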
00:18:02.365 18:27:00 -- host/multipath.sh@65 -- # dtrace_pid=84554 00:18:02.365 18:27:00 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:02.365 18:27:00 -- host/multipath.sh@66 -- # sleep 6 00:18:08.927 18:27:06 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:08.927 18:27:06 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:08.927 18:27:06 -- host/multipath.sh@67 -- # active_port=4420 00:18:08.927 18:27:06 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:08.927 Attaching 4 probes... 00:18:08.927 @path[10.0.0.2, 4420]: 18816 00:18:08.927 @path[10.0.0.2, 4420]: 19178 00:18:08.927 @path[10.0.0.2, 4420]: 19327 00:18:08.927 @path[10.0.0.2, 4420]: 19483 00:18:08.927 @path[10.0.0.2, 4420]: 19566 00:18:08.927 18:27:06 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:08.927 18:27:06 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:08.927 18:27:06 -- host/multipath.sh@69 -- # sed -n 1p 00:18:08.927 18:27:06 -- host/multipath.sh@69 -- # port=4420 00:18:08.927 18:27:06 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:08.927 18:27:06 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:08.927 18:27:06 -- host/multipath.sh@72 -- # kill 84554 00:18:08.927 18:27:06 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:08.927 18:27:06 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:08.927 [2024-11-17 18:27:06.987378] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:08.927 18:27:07 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:09.184 18:27:07 -- host/multipath.sh@111 -- # sleep 6 00:18:15.748 18:27:13 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:15.748 18:27:13 -- host/multipath.sh@65 -- # dtrace_pid=84728 00:18:15.748 18:27:13 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:15.748 18:27:13 -- host/multipath.sh@66 -- # sleep 6 00:18:21.029 18:27:19 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:21.029 18:27:19 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:21.288 18:27:19 -- host/multipath.sh@67 -- # active_port=4421 00:18:21.288 18:27:19 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.288 Attaching 4 probes... 
00:18:21.288 @path[10.0.0.2, 4421]: 19681 00:18:21.288 @path[10.0.0.2, 4421]: 19827 00:18:21.288 @path[10.0.0.2, 4421]: 20007 00:18:21.288 @path[10.0.0.2, 4421]: 19976 00:18:21.288 @path[10.0.0.2, 4421]: 19592 00:18:21.288 18:27:19 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:21.288 18:27:19 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:21.288 18:27:19 -- host/multipath.sh@69 -- # sed -n 1p 00:18:21.288 18:27:19 -- host/multipath.sh@69 -- # port=4421 00:18:21.288 18:27:19 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:21.288 18:27:19 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:21.288 18:27:19 -- host/multipath.sh@72 -- # kill 84728 00:18:21.288 18:27:19 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.288 18:27:19 -- host/multipath.sh@114 -- # killprocess 83931 00:18:21.288 18:27:19 -- common/autotest_common.sh@936 -- # '[' -z 83931 ']' 00:18:21.288 18:27:19 -- common/autotest_common.sh@940 -- # kill -0 83931 00:18:21.288 18:27:19 -- common/autotest_common.sh@941 -- # uname 00:18:21.288 18:27:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.288 18:27:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83931 00:18:21.562 18:27:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:21.562 18:27:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:21.562 killing process with pid 83931 00:18:21.562 18:27:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83931' 00:18:21.562 18:27:19 -- common/autotest_common.sh@955 -- # kill 83931 00:18:21.562 18:27:19 -- common/autotest_common.sh@960 -- # wait 83931 00:18:21.562 Connection closed with partial response: 00:18:21.562 00:18:21.562 00:18:21.562 18:27:19 -- host/multipath.sh@116 -- # wait 83931 00:18:21.562 18:27:19 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:21.562 [2024-11-17 18:26:22.871429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:21.562 [2024-11-17 18:26:22.871547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83931 ] 00:18:21.562 [2024-11-17 18:26:23.001535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.562 [2024-11-17 18:26:23.036156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.562 Running I/O for 90 seconds... 
00:18:21.562 [2024-11-17 18:26:32.327060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.562 [2024-11-17 18:26:32.327132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.562 [2024-11-17 18:26:32.327295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.562 [2024-11-17 18:26:32.327676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.562 [2024-11-17 18:26:32.327708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.562 [2024-11-17 18:26:32.327836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.562 [2024-11-17 18:26:32.327900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.327967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.327988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.562 [2024-11-17 18:26:32.328020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.328042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.562 [2024-11-17 18:26:32.328057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.328088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.328104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.328127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.328142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.328164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.328180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.328201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.328217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.328239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.328270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.328305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.562 [2024-11-17 18:26:32.328319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:21.562 [2024-11-17 18:26:32.328354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:21.563 [2024-11-17 18:26:32.328378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.328415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.328454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.328488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.328522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.328555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.328598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.328631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.328665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.328697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.328730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.328763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.328796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.328829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.328862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.328896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.328965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.328986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.329062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.329138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.329176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.329266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.329332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:18:21.563 [2024-11-17 18:26:32.329552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.329682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.563 [2024-11-17 18:26:32.329797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.563 [2024-11-17 18:26:32.329863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:21.563 [2024-11-17 18:26:32.329883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.329896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.329933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.329963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.329989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.330006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.330042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.330124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.330203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.330306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.330702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:21.564 [2024-11-17 18:26:32.330779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.330831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.330950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.330974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.330989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.331029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.331066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.331104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.331163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.331201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:83 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.331267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.331316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.331350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.331384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.331432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.564 [2024-11-17 18:26:32.331466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.331500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.564 [2024-11-17 18:26:32.331533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:21.564 [2024-11-17 18:26:32.331553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.331569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.331589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.331603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.331623] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.331644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.331665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.331679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.331699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.331713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.331734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.331748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.333490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:32.333536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:32.333572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.333607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.333642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.333682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 
sqhd:0063 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.333731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:32.333766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:32.333799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:32.333846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:32.333882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:32.333933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.333971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.333987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.334009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:32.334025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.334061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:32.334082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:32.334105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:32.334121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.865984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:38.866045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:38.866121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:38.866157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:38.866192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:38.866226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:38.866332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:38.866365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:38.866398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:38.866430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.565 [2024-11-17 18:26:38.866462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:38.866540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:38.866577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:38.866612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:38.866647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:38.866682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:38.866717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:21.565 [2024-11-17 18:26:38.866738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.565 [2024-11-17 18:26:38.866752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.866786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.866819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.866854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.866868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.866888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.866916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.866954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.866974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.866994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.867008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.867252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.867288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.867322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.867371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.867407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.867441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.867475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:41 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.867518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.867556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.867589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.867623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.867657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.867691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.867725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.867758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.867793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.867826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867846] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.867859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.867893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.867933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.867970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.867990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.868004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.868023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.868037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.868058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.868071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.868091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.566 [2024-11-17 18:26:38.868105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.868125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.868139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.868159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.566 [2024-11-17 18:26:38.868172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 
p:0 m:0 dnr:0 00:18:21.566 [2024-11-17 18:26:38.868192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.868206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.868239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.868355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.868432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.868487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.868522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.868624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.868725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.868758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.868971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.868991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.869005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.869038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.869071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.869106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.869140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.869173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.869207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:21.567 [2024-11-17 18:26:38.869264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.869315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.869351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.869387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.869421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.869456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.567 [2024-11-17 18:26:38.869491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.869543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.869578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:21.567 [2024-11-17 18:26:38.869616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.567 [2024-11-17 18:26:38.869630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.869652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.568 [2024-11-17 18:26:38.869666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.869688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.568 [2024-11-17 18:26:38.869703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.869725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.869747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.869771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.869786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.568 [2024-11-17 18:26:38.871390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.568 [2024-11-17 18:26:38.871432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.568 [2024-11-17 18:26:38.871530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.568 [2024-11-17 18:26:38.871571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.568 [2024-11-17 18:26:38.871654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:18:21.568 [2024-11-17 18:26:38.871723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.568 [2024-11-17 18:26:38.871738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.568 [2024-11-17 18:26:38.871861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.871944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.871986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.568 [2024-11-17 18:26:38.872006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.872047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.872063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.872090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.872104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.872132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.872145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:38.872172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:38.872186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:45.986891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.568 [2024-11-17 18:26:45.986978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:45.987028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:45.987046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:45.987067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:45.987080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:45.987099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:45.987113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:45.987134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:45.987147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:45.987166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:45.987180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:45.987199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:45.987212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:21.568 [2024-11-17 18:26:45.987232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.568 [2024-11-17 18:26:45.987245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.987329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.987368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.987402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.987436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.987469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.987503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.987536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.987586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.987618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.987651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 
[2024-11-17 18:26:45.987683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.987716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.987758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.987793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.987826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.987859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.987926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.987961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.987981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.987996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.988031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5840 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.988066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.988102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.988137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.988172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.988222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.988266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.988313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.988375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.988449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.988485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:59 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.988521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.988558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.988594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.988630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.569 [2024-11-17 18:26:45.988666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.988701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.988737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.569 [2024-11-17 18:26:45.988800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:21.569 [2024-11-17 18:26:45.988820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.988835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.988855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.988903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.988925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.988940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.988963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.988978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.989016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.989060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.989245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.989290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:21.570 
[2024-11-17 18:26:45.989313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.989379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.989417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.989483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.989518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 
cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.989817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.989851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.989936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.989971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.989992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.990006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.990026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.990040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.990061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.990075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.990096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.570 [2024-11-17 18:26:45.990110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.990131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.990145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.990165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.990180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.990201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.990216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.990258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.990273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.990294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.990308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.990327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.570 [2024-11-17 18:26:45.990352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:21.570 [2024-11-17 18:26:45.990390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.990885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:21.571 [2024-11-17 18:26:45.990920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.571 [2024-11-17 18:26:45.990955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.990976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.571 [2024-11-17 18:26:45.990991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.991026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.991060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.991095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.571 [2024-11-17 18:26:45.991130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.571 [2024-11-17 18:26:45.991165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.571 [2024-11-17 18:26:45.991214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.991256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6304 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.571 [2024-11-17 18:26:45.991291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.991325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.991374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.571 [2024-11-17 18:26:45.991408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.571 [2024-11-17 18:26:45.991442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.571 [2024-11-17 18:26:45.991476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.991514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.991549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.991582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.991616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.991636] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.991651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.993196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.993226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:21.571 [2024-11-17 18:26:45.993279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.571 [2024-11-17 18:26:45.993296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.993417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.993519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.993552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993572] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.993587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.993620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.993918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 
dnr:0 00:18:21.572 [2024-11-17 18:26:45.993939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.993953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.993974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.993989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.994024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.994079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.994116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.994151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.994195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.994232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.994297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.994348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.994385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.994421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.994457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.994523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.994561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.994598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.994635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.994678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.994723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.572 [2024-11-17 18:26:45.994762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.572 [2024-11-17 18:26:45.994829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:21.572 [2024-11-17 18:26:45.994849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.994865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.994885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.994899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.994920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.994951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.994973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.994987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 
[2024-11-17 18:26:45.995168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6000 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:45.995932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:45.995952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:55 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:45.995969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:46.008406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:46.008444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:46.008471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:46.008489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:46.008512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:46.008527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:46.008548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.573 [2024-11-17 18:26:46.008563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:46.008585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:46.008600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:46.008636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:46.008679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:46.008718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:46.008734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:46.008755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:46.008770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:46.008791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:46.008806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:46.008827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.573 [2024-11-17 18:26:46.008841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:21.573 [2024-11-17 18:26:46.008862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.008893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.008913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.008928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.008950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.008965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.008987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.009052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.009124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.009166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.009203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.574 
[2024-11-17 18:26:46.009223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.009238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 
cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.009972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.009992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.010006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.010040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.010074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.010109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.010143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.010185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.010220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.010253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.010305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.010343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.574 [2024-11-17 18:26:46.010396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.574 [2024-11-17 18:26:46.010434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:21.574 [2024-11-17 18:26:46.010456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.010472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.010511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.010545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.010574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.010594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.010623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.010643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.010673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.010694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.010724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.010754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.010794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.010824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.010853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.010873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.010911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.010930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:21.575 [2024-11-17 18:26:46.013123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.013186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.013236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.013326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.013378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.013428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.013478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.013527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.013592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.013657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:6400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.013724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.013774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.013824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.013884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.013933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.013962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.013982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.014012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.014032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.014061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.014081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.014110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.014130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.014159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.014179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.014208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.014228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.014268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.014317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.014349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.014370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.014421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.014448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.014491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.575 [2024-11-17 18:26:46.014516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:21.575 [2024-11-17 18:26:46.014546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.575 [2024-11-17 18:26:46.014567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.014596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.014616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.014646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.014667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.014697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.014717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.014746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.014766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
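The repeated "(03/02)" completions in the trace above are NVMe path-related status (status code type 0x3) with status code 0x02, Asymmetric Access Inaccessible, which is what an ANA state change on the target is expected to produce for I/O on the affected path. As a minimal illustrative sketch (not part of this build or of SPDK itself), the C snippet below shows how a consumer of the SPDK NVMe driver could recognize that status from a completion; the helper name cpl_is_ana_inaccessible and the fabricated completion in main are assumptions for illustration only, while struct spdk_nvme_cpl and its status fields come from the SPDK public headers.

```c
/*
 * Sketch: classify the "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" completions
 * seen in the log above. Build against the installed SPDK headers.
 */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/*
 * SCT 0x3 is the path-related status code type; SC 0x02 within it is
 * Asymmetric Access Inaccessible. The numeric values are taken directly
 * from the "(03/02)" printed by spdk_nvme_print_completion in this log.
 */
static bool
cpl_is_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == 0x3 && cpl->status.sc == 0x02;
}

int
main(void)
{
	/* Fake completion mirroring one of the entries above (cid:49, 03/02). */
	struct spdk_nvme_cpl cpl = {0};

	cpl.cid = 49;
	cpl.status.sct = 0x3;  /* path related */
	cpl.status.sc = 0x02;  /* asymmetric access inaccessible */

	if (cpl_is_ana_inaccessible(&cpl)) {
		/* A multipath-aware initiator would typically retry this I/O on
		 * another controller/path instead of failing it to the caller. */
		printf("cid %u: ANA inaccessible on this path, retry elsewhere\n",
		       cpl.cid);
	}

	return 0;
}
```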
00:18:21.576 [2024-11-17 18:26:46.014795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.014815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.014848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.014872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.014901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.014921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.014952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.014984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.015043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.015092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.015141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.015191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.015240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.015327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.015378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.015428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.015477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.015526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.015578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.015636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.015719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.015768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.015817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.015866] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.015915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.015964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.015993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.016014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.016064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.016136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.016188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.016237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.016317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.016382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.016432] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.016482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.016532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-11-17 18:26:46.016581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.576 [2024-11-17 18:26:46.016630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:21.576 [2024-11-17 18:26:46.016668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.016706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.016736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.016756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.016785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.016805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.016834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.016854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.016892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.016912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.016942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:21.577 [2024-11-17 18:26:46.016962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.017021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.017072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.017121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.017170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.017220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.017307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.017359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.017408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.017458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5360 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.017508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.017557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.017606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.017682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.017739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.017788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.017849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.017898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.017947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.017973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.018025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018055] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-11-17 18:26:46.018074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 
18:26:46.018615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.577 [2024-11-17 18:26:46.018798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:21.577 [2024-11-17 18:26:46.018827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.018847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.018883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.018903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.018933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.018953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.018982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.019002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.019062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.019112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 
sqhd:0014 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.019161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.019211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.019260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.019350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.019400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.019449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.019498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.019547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.019596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.019654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.019726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.019775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.019824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.019873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.019921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.019970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.019999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.020019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.020048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.020067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.022149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.022234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.022318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.022370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.022437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.022516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.022570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.022620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.022669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.022718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.022788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 
[2024-11-17 18:26:46.022848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-11-17 18:26:46.022896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.022930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-11-17 18:26:46.022965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:21.578 [2024-11-17 18:26:46.022985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.022998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5696 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.023930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.023964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.023991] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.024007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.024027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.024041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.024062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.024075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.024095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.024109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.024128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.024142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.024161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.024175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.024195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.024208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.024228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.024241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.024261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-11-17 18:26:46.024275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:21.579 [2024-11-17 18:26:46.024327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-11-17 18:26:46.024355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:18:21.580 [2024-11-17 18:26:46.024378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.024393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.024429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.024472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.024510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.024570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.024608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.024658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.024707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.024741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.024775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.024808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.024842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.024891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.024925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.024960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.024990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.025075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.025109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025144] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.025179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.025214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.025248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.025318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025595] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.025684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.025736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.025818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-11-17 18:26:46.025852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-11-17 18:26:46.025886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:21.580 [2024-11-17 18:26:46.025922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.025942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.025963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.025980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:21.581 [2024-11-17 18:26:46.026025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.026786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.026821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.026856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.026964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.026978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.027006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.027022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.027042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.027056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.027076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.027090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.027111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.027125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.027145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.027159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:18:21.581 [2024-11-17 18:26:46.027180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.027194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.027214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.027229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.027249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.027263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.027298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.027312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.027344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-11-17 18:26:46.027362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.027383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.027398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:21.581 [2024-11-17 18:26:46.027419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-11-17 18:26:46.027434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:46.027810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:46.027836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 
[2024-11-17 18:26:59.418925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.418984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.418998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-11-17 18:26:59.419087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-11-17 18:26:59.419436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-11-17 18:26:59.419517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-11-17 18:26:59.419571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:74 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-11-17 18:26:59.419768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-11-17 18:26:59.419781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.419795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.419807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.419837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.419849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.419864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.419876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.419890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11784 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.419903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.419917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.419930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.419944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.419975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.419989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:21.583 [2024-11-17 18:26:59.420223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420537] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420841] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.420951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-11-17 18:26:59.420980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.420995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-11-17 18:26:59.421008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-11-17 18:26:59.421023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-11-17 18:26:59.421094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-11-17 18:26:59.421123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-11-17 18:26:59.421418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-11-17 18:26:59.421445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-11-17 18:26:59.421472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:21.584 [2024-11-17 18:26:59.421486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-11-17 18:26:59.421499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-11-17 18:26:59.421634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-11-17 18:26:59.421661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-11-17 18:26:59.421696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421763] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.421957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.421976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-11-17 18:26:59.421989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.422003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.422016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.422031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-11-17 18:26:59.422050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-11-17 18:26:59.422065] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.585 [2024-11-17 18:26:59.422207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.585 [2024-11-17 18:26:59.422234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12240 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-11-17 18:26:59.422480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1412100 is same with the state(5) to be set 00:18:21.585 [2024-11-17 18:26:59.422531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.585 [2024-11-17 18:26:59.422542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.585 [2024-11-17 18:26:59.422554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12336 len:8 PRP1 0x0 PRP2 0x0 00:18:21.585 [2024-11-17 18:26:59.422568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-11-17 18:26:59.422617] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1412100 was disconnected and freed. reset controller. 
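The two bursts of NOTICE lines above are spdk_nvme_print_completion dumps for I/O caught on the path that is being taken away; the pair in parentheses is the NVMe (status code type/status code) in hex, so ASYMMETRIC ACCESS INACCESSIBLE (03/02) is a Path Related status returned while the namespace's ANA state is inaccessible, and ABORTED - SQ DELETION (00/08) is the generic abort reported when the submission queue itself is deleted as the qpair is torn down. A quick way to tally the two outcomes, assuming this console output has been saved to a file (log.txt is only a placeholder name, not something the test writes):

  grep -oE 'ASYMMETRIC ACCESS INACCESSIBLE|ABORTED - SQ DELETION' log.txt | sort | uniq -c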
00:18:21.585 [2024-11-17 18:26:59.423742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:21.585 [2024-11-17 18:26:59.423844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14213c0 (9): Bad file descriptor
00:18:21.585 [2024-11-17 18:26:59.424181] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:21.585 [2024-11-17 18:26:59.424253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:21.585 [2024-11-17 18:26:59.424303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:21.585 [2024-11-17 18:26:59.424324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14213c0 with addr=10.0.0.2, port=4421
00:18:21.585 [2024-11-17 18:26:59.424340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14213c0 is same with the state(5) to be set
00:18:21.585 [2024-11-17 18:26:59.424392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14213c0 (9): Bad file descriptor
00:18:21.585 [2024-11-17 18:26:59.424422] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:21.585 [2024-11-17 18:26:59.424438] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:21.585 [2024-11-17 18:26:59.424451] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:21.585 [2024-11-17 18:26:59.424480] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:21.585 [2024-11-17 18:26:59.424497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:21.585 [2024-11-17 18:27:09.473031] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
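The sequence above is the failover itself: the qpair on the failed path is freed, bdev_nvme disconnects and resets the controller, the first reconnect attempts to 10.0.0.2:4421 are refused (errno = 111, reported by both the uring and posix socket layers) while nothing is listening there yet, and the reset only succeeds about ten seconds later once that portal is reachable. A path flip of this kind can be driven by hand against a running target with rpc.py; this is a minimal sketch using the NQN, address, and rpc.py path seen in this log, assuming the original listener sat on the default port 4420, and it is not the exact sequence multipath.sh issues:

  # drop the listener the host is currently connected to ...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # ... and expose the subsystem on the alternate portal; the host reconnects there on its next controller reset
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421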
00:18:21.585 Received shutdown signal, test time was about 55.478700 seconds 00:18:21.585 00:18:21.585 Latency(us) 00:18:21.585 [2024-11-17T18:27:19.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.585 [2024-11-17T18:27:19.852Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:21.585 Verification LBA range: start 0x0 length 0x4000 00:18:21.585 Nvme0n1 : 55.48 11248.95 43.94 0.00 0.00 11364.02 411.46 7076934.75 00:18:21.585 [2024-11-17T18:27:19.852Z] =================================================================================================================== 00:18:21.585 [2024-11-17T18:27:19.852Z] Total : 11248.95 43.94 0.00 0.00 11364.02 411.46 7076934.75 00:18:21.585 18:27:19 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.845 18:27:20 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:21.845 18:27:20 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:21.845 18:27:20 -- host/multipath.sh@125 -- # nvmftestfini 00:18:21.845 18:27:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:21.845 18:27:20 -- nvmf/common.sh@116 -- # sync 00:18:21.845 18:27:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:21.845 18:27:20 -- nvmf/common.sh@119 -- # set +e 00:18:21.845 18:27:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:21.845 18:27:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:21.845 rmmod nvme_tcp 00:18:21.845 rmmod nvme_fabrics 00:18:21.845 rmmod nvme_keyring 00:18:22.104 18:27:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:22.104 18:27:20 -- nvmf/common.sh@123 -- # set -e 00:18:22.104 18:27:20 -- nvmf/common.sh@124 -- # return 0 00:18:22.104 18:27:20 -- nvmf/common.sh@477 -- # '[' -n 83883 ']' 00:18:22.104 18:27:20 -- nvmf/common.sh@478 -- # killprocess 83883 00:18:22.104 18:27:20 -- common/autotest_common.sh@936 -- # '[' -z 83883 ']' 00:18:22.104 18:27:20 -- common/autotest_common.sh@940 -- # kill -0 83883 00:18:22.104 18:27:20 -- common/autotest_common.sh@941 -- # uname 00:18:22.104 18:27:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.104 18:27:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83883 00:18:22.104 18:27:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:22.104 killing process with pid 83883 00:18:22.104 18:27:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:22.104 18:27:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83883' 00:18:22.104 18:27:20 -- common/autotest_common.sh@955 -- # kill 83883 00:18:22.104 18:27:20 -- common/autotest_common.sh@960 -- # wait 83883 00:18:22.104 18:27:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:22.104 18:27:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:22.104 18:27:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:22.104 18:27:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.104 18:27:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:22.104 18:27:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.104 18:27:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.104 18:27:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.104 18:27:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:22.104 00:18:22.104 real 0m59.973s 00:18:22.104 user 2m46.608s 00:18:22.104 
sys 0m17.898s 00:18:22.104 18:27:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:22.104 ************************************ 00:18:22.104 END TEST nvmf_multipath 00:18:22.104 18:27:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.104 ************************************ 00:18:22.363 18:27:20 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:22.363 18:27:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:22.363 18:27:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:22.363 18:27:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.363 ************************************ 00:18:22.363 START TEST nvmf_timeout 00:18:22.363 ************************************ 00:18:22.363 18:27:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:22.363 * Looking for test storage... 00:18:22.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:22.363 18:27:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:22.363 18:27:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:22.363 18:27:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:22.363 18:27:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:22.363 18:27:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:22.363 18:27:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:22.363 18:27:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:22.363 18:27:20 -- scripts/common.sh@335 -- # IFS=.-: 00:18:22.363 18:27:20 -- scripts/common.sh@335 -- # read -ra ver1 00:18:22.363 18:27:20 -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.363 18:27:20 -- scripts/common.sh@336 -- # read -ra ver2 00:18:22.363 18:27:20 -- scripts/common.sh@337 -- # local 'op=<' 00:18:22.363 18:27:20 -- scripts/common.sh@339 -- # ver1_l=2 00:18:22.363 18:27:20 -- scripts/common.sh@340 -- # ver2_l=1 00:18:22.363 18:27:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:22.363 18:27:20 -- scripts/common.sh@343 -- # case "$op" in 00:18:22.363 18:27:20 -- scripts/common.sh@344 -- # : 1 00:18:22.363 18:27:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:22.363 18:27:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.363 18:27:20 -- scripts/common.sh@364 -- # decimal 1 00:18:22.363 18:27:20 -- scripts/common.sh@352 -- # local d=1 00:18:22.363 18:27:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.363 18:27:20 -- scripts/common.sh@354 -- # echo 1 00:18:22.363 18:27:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:22.363 18:27:20 -- scripts/common.sh@365 -- # decimal 2 00:18:22.363 18:27:20 -- scripts/common.sh@352 -- # local d=2 00:18:22.363 18:27:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.363 18:27:20 -- scripts/common.sh@354 -- # echo 2 00:18:22.363 18:27:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:22.363 18:27:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:22.363 18:27:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:22.363 18:27:20 -- scripts/common.sh@367 -- # return 0 00:18:22.363 18:27:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.363 18:27:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:22.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.363 --rc genhtml_branch_coverage=1 00:18:22.363 --rc genhtml_function_coverage=1 00:18:22.363 --rc genhtml_legend=1 00:18:22.363 --rc geninfo_all_blocks=1 00:18:22.363 --rc geninfo_unexecuted_blocks=1 00:18:22.363 00:18:22.363 ' 00:18:22.363 18:27:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:22.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.363 --rc genhtml_branch_coverage=1 00:18:22.363 --rc genhtml_function_coverage=1 00:18:22.363 --rc genhtml_legend=1 00:18:22.363 --rc geninfo_all_blocks=1 00:18:22.363 --rc geninfo_unexecuted_blocks=1 00:18:22.363 00:18:22.363 ' 00:18:22.363 18:27:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:22.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.363 --rc genhtml_branch_coverage=1 00:18:22.363 --rc genhtml_function_coverage=1 00:18:22.363 --rc genhtml_legend=1 00:18:22.363 --rc geninfo_all_blocks=1 00:18:22.363 --rc geninfo_unexecuted_blocks=1 00:18:22.363 00:18:22.363 ' 00:18:22.363 18:27:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:22.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.363 --rc genhtml_branch_coverage=1 00:18:22.363 --rc genhtml_function_coverage=1 00:18:22.363 --rc genhtml_legend=1 00:18:22.363 --rc geninfo_all_blocks=1 00:18:22.363 --rc geninfo_unexecuted_blocks=1 00:18:22.363 00:18:22.363 ' 00:18:22.363 18:27:20 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.363 18:27:20 -- nvmf/common.sh@7 -- # uname -s 00:18:22.363 18:27:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.363 18:27:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.363 18:27:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.363 18:27:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.363 18:27:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.363 18:27:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.363 18:27:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.363 18:27:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.363 18:27:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.363 18:27:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.622 18:27:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:18:22.622 
18:27:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:18:22.622 18:27:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.622 18:27:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.622 18:27:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.622 18:27:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.622 18:27:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.622 18:27:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.622 18:27:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.622 18:27:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.622 18:27:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.622 18:27:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.622 18:27:20 -- paths/export.sh@5 -- # export PATH 00:18:22.622 18:27:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.622 18:27:20 -- nvmf/common.sh@46 -- # : 0 00:18:22.622 18:27:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:22.622 18:27:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:22.622 18:27:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:22.622 18:27:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.622 18:27:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.622 18:27:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
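The NVME_HOSTNQN/NVME_HOSTID pair generated above ("nvme gen-hostnqn" emits a UUID-based NQN, and the harness keeps the bare UUID as the host ID) is what common.sh splices into kernel-initiator connects via the NVME_HOST array and NVME_CONNECT='nvme connect'. A minimal sketch of that usage, assuming nvme-cli as the consumer; in this particular transcript the I/O path is bdevperf, so the connect line is illustrative only:

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # the bare UUID, as stored in NVME_HOSTID above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"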
00:18:22.622 18:27:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:22.622 18:27:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:22.622 18:27:20 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:22.622 18:27:20 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:22.622 18:27:20 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:22.622 18:27:20 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:22.623 18:27:20 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:22.623 18:27:20 -- host/timeout.sh@19 -- # nvmftestinit 00:18:22.623 18:27:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:22.623 18:27:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.623 18:27:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:22.623 18:27:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:22.623 18:27:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:22.623 18:27:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.623 18:27:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.623 18:27:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.623 18:27:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:22.623 18:27:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:22.623 18:27:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:22.623 18:27:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:22.623 18:27:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:22.623 18:27:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:22.623 18:27:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.623 18:27:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.623 18:27:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:22.623 18:27:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:22.623 18:27:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:22.623 18:27:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:22.623 18:27:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:22.623 18:27:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.623 18:27:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:22.623 18:27:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:22.623 18:27:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:22.623 18:27:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:22.623 18:27:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:22.623 18:27:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:22.623 Cannot find device "nvmf_tgt_br" 00:18:22.623 18:27:20 -- nvmf/common.sh@154 -- # true 00:18:22.623 18:27:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.623 Cannot find device "nvmf_tgt_br2" 00:18:22.623 18:27:20 -- nvmf/common.sh@155 -- # true 00:18:22.623 18:27:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:22.623 18:27:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:22.623 Cannot find device "nvmf_tgt_br" 00:18:22.623 18:27:20 -- nvmf/common.sh@157 -- # true 00:18:22.623 18:27:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:22.623 Cannot find device "nvmf_tgt_br2" 00:18:22.623 18:27:20 -- nvmf/common.sh@158 -- # true 00:18:22.623 18:27:20 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:22.623 18:27:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:22.623 18:27:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:22.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.623 18:27:20 -- nvmf/common.sh@161 -- # true 00:18:22.623 18:27:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:22.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.623 18:27:20 -- nvmf/common.sh@162 -- # true 00:18:22.623 18:27:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:22.623 18:27:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:22.623 18:27:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:22.623 18:27:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:22.623 18:27:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:22.623 18:27:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:22.623 18:27:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:22.623 18:27:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:22.623 18:27:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:22.623 18:27:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:22.623 18:27:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:22.623 18:27:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:22.623 18:27:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:22.623 18:27:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:22.882 18:27:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:22.882 18:27:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:22.882 18:27:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:22.882 18:27:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:22.882 18:27:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:22.882 18:27:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:22.882 18:27:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:22.882 18:27:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:22.882 18:27:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:22.882 18:27:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:22.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:18:22.882 00:18:22.882 --- 10.0.0.2 ping statistics --- 00:18:22.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.882 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:22.882 18:27:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:22.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:22.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:22.882 00:18:22.882 --- 10.0.0.3 ping statistics --- 00:18:22.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.882 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:22.882 18:27:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:22.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:22.882 00:18:22.882 --- 10.0.0.1 ping statistics --- 00:18:22.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.882 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:22.882 18:27:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.882 18:27:20 -- nvmf/common.sh@421 -- # return 0 00:18:22.882 18:27:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:22.882 18:27:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.882 18:27:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:22.882 18:27:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:22.882 18:27:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.882 18:27:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:22.882 18:27:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:22.882 18:27:21 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:22.882 18:27:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:22.882 18:27:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.882 18:27:21 -- common/autotest_common.sh@10 -- # set +x 00:18:22.882 18:27:21 -- nvmf/common.sh@469 -- # nvmfpid=85048 00:18:22.882 18:27:21 -- nvmf/common.sh@470 -- # waitforlisten 85048 00:18:22.882 18:27:21 -- common/autotest_common.sh@829 -- # '[' -z 85048 ']' 00:18:22.882 18:27:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.882 18:27:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:22.882 18:27:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.882 18:27:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.882 18:27:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.882 18:27:21 -- common/autotest_common.sh@10 -- # set +x 00:18:22.882 [2024-11-17 18:27:21.060887] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:22.882 [2024-11-17 18:27:21.060991] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.141 [2024-11-17 18:27:21.199088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:23.141 [2024-11-17 18:27:21.241669] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:23.141 [2024-11-17 18:27:21.241880] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.141 [2024-11-17 18:27:21.241896] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
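All three pings succeeding closes out nvmf_veth_init: the target listens on 10.0.0.2 (and 10.0.0.3) inside a private network namespace while the initiator stays in the root namespace on 10.0.0.1, joined by veth pairs hanging off a bridge. A condensed sketch of the topology commands traced above (the earlier "Cannot find device" / "Cannot open network namespace" errors are just the cleanup pass running before anything exists; the "ip link set ... up" steps are omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br     # likewise nvmf_tgt_br and nvmf_tgt_br2
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                          # initiator -> target, as verified above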
00:18:23.141 [2024-11-17 18:27:21.241908] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.141 [2024-11-17 18:27:21.243342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.141 [2024-11-17 18:27:21.243365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.141 18:27:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:23.141 18:27:21 -- common/autotest_common.sh@862 -- # return 0 00:18:23.141 18:27:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:23.141 18:27:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:23.141 18:27:21 -- common/autotest_common.sh@10 -- # set +x 00:18:23.141 18:27:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.141 18:27:21 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:23.141 18:27:21 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:23.400 [2024-11-17 18:27:21.621289] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.400 18:27:21 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:23.967 Malloc0 00:18:23.967 18:27:21 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:23.967 18:27:22 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:24.226 18:27:22 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.485 [2024-11-17 18:27:22.645656] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.485 18:27:22 -- host/timeout.sh@32 -- # bdevperf_pid=85084 00:18:24.485 18:27:22 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:24.485 18:27:22 -- host/timeout.sh@34 -- # waitforlisten 85084 /var/tmp/bdevperf.sock 00:18:24.485 18:27:22 -- common/autotest_common.sh@829 -- # '[' -z 85084 ']' 00:18:24.485 18:27:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.485 18:27:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.485 18:27:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.485 18:27:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.485 18:27:22 -- common/autotest_common.sh@10 -- # set +x 00:18:24.485 [2024-11-17 18:27:22.704426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
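Stripped of the xtrace noise, the bring-up traced above and just below reduces to a handful of RPCs: five against the target's default /var/tmp/spdk.sock, then two against the bdevperf instance's /var/tmp/bdevperf.sock (paths shortened here; the harness invokes the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

  # target side (nvmf_tgt started with -m 0x3 inside nvmf_tgt_ns_spdk)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side (bdevperf, RPC socket /var/tmp/bdevperf.sock)
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2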
00:18:24.485 [2024-11-17 18:27:22.704534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85084 ] 00:18:24.744 [2024-11-17 18:27:22.837896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.744 [2024-11-17 18:27:22.879632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.680 18:27:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.680 18:27:23 -- common/autotest_common.sh@862 -- # return 0 00:18:25.680 18:27:23 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:25.680 18:27:23 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:25.939 NVMe0n1 00:18:25.939 18:27:24 -- host/timeout.sh@51 -- # rpc_pid=85109 00:18:25.939 18:27:24 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.939 18:27:24 -- host/timeout.sh@53 -- # sleep 1 00:18:26.197 Running I/O for 10 seconds... 00:18:27.135 18:27:25 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.135 [2024-11-17 18:27:25.394798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394877] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394885] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 
18:27:25.394943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394976] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394984] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.394992] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395034] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395091] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395108] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to 
be set 00:18:27.135 [2024-11-17 18:27:25.395124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30a60 is same with the state(5) to be set 00:18:27.135 [2024-11-17 18:27:25.395182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.135 [2024-11-17 18:27:25.395214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.135 [2024-11-17 18:27:25.395238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.135 [2024-11-17 18:27:25.395250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.135 [2024-11-17 18:27:25.395263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.135 [2024-11-17 18:27:25.395288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.135 [2024-11-17 18:27:25.395303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.135 [2024-11-17 18:27:25.395313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 
18:27:25.395441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.136 [2024-11-17 18:27:25.395786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.136 [2024-11-17 18:27:25.395808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.136 [2024-11-17 18:27:25.395850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.136 [2024-11-17 18:27:25.395872] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.136 [2024-11-17 18:27:25.395914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.136 [2024-11-17 18:27:25.395936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.395977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.395989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.396000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.396011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.136 [2024-11-17 18:27:25.396021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.396032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.136 [2024-11-17 18:27:25.396042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.396054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.396063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.396075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.136 [2024-11-17 18:27:25.396085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.396097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.396106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.396118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.396127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.396139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.136 [2024-11-17 18:27:25.396148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.136 [2024-11-17 18:27:25.396160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.137 [2024-11-17 18:27:25.396190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.137 [2024-11-17 18:27:25.396212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.137 [2024-11-17 18:27:25.396233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.137 [2024-11-17 18:27:25.396444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.137 [2024-11-17 18:27:25.396465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.137 [2024-11-17 18:27:25.396528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 
[2024-11-17 18:27:25.396540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.137 [2024-11-17 18:27:25.396658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.137 [2024-11-17 18:27:25.396727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.137 [2024-11-17 18:27:25.396747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396759] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.137 [2024-11-17 18:27:25.396769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.396982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.396993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.137 [2024-11-17 18:27:25.397003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.137 [2024-11-17 18:27:25.397016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.137 [2024-11-17 18:27:25.397026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397187] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123480 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:123512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 [2024-11-17 18:27:25.397812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.138 
[2024-11-17 18:27:25.397855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.138 [2024-11-17 18:27:25.397876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.138 [2024-11-17 18:27:25.397888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.139 [2024-11-17 18:27:25.397897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.139 [2024-11-17 18:27:25.397909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.139 [2024-11-17 18:27:25.397919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.139 [2024-11-17 18:27:25.397930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.139 [2024-11-17 18:27:25.397943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.139 [2024-11-17 18:27:25.397955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.139 [2024-11-17 18:27:25.397965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.139 [2024-11-17 18:27:25.397976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.139 [2024-11-17 18:27:25.397986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.139 [2024-11-17 18:27:25.397997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b9a0 is same with the state(5) to be set 00:18:27.139 [2024-11-17 18:27:25.398009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:27.139 [2024-11-17 18:27:25.398017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:27.139 [2024-11-17 18:27:25.398026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122976 len:8 PRP1 0x0 PRP2 0x0 00:18:27.139 [2024-11-17 18:27:25.398035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.139 [2024-11-17 18:27:25.398077] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa0b9a0 was disconnected and freed. reset controller. 
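The wall of notices above is the I/O abort flood that bdevperf prints when the target deletes the submission queue mid-run: every in-flight READ/WRITE on qpair 1 is completed with ABORTED - SQ DELETION (00/08), after which the qpair (0xa0b9a0) is freed and a controller reset is scheduled. When scanning a log like this, a per-opcode summary is usually more useful than the raw dump. The helper below is a hypothetical post-processing sketch; it is not part of host/timeout.sh or the SPDK scripts, and it only assumes the console output has been saved to a file such as build.log.

#!/usr/bin/env bash
# Hypothetical helper: collapse the "ABORTED - SQ DELETION" flood in a saved copy
# of this build log into a per-opcode count plus the LBA range it covered.
log="${1:-build.log}"

awk '
BEGIN { min = -1; max = 0 }
{
    for (i = 1; i <= NF; i++) {
        if ($i == "READ" || $i == "WRITE") op = $i
        else if ($i ~ /^lba:/ && op != "") {
            split($i, a, ":"); lba = a[2] + 0
            count[op]++                       # one lba: field per printed command
            if (min < 0 || lba < min) min = lba
            if (lba > max) max = lba
        }
    }
}
END {
    for (op in count) printf "%-5s commands aborted: %d\n", op, count[op]
    if (min >= 0) printf "LBA range touched: %d-%d\n", min, max
}' "$log"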
00:18:27.139 [2024-11-17 18:27:25.398165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.139 [2024-11-17 18:27:25.398190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.139 [2024-11-17 18:27:25.398202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.139 [2024-11-17 18:27:25.398212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.139 [2024-11-17 18:27:25.398222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.139 [2024-11-17 18:27:25.398231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.139 [2024-11-17 18:27:25.398241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.139 [2024-11-17 18:27:25.398251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.139 [2024-11-17 18:27:25.398260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa10610 is same with the state(5) to be set 00:18:27.398 [2024-11-17 18:27:25.398522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:27.398 [2024-11-17 18:27:25.398560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa10610 (9): Bad file descriptor 00:18:27.398 [2024-11-17 18:27:25.398671] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:27.398 [2024-11-17 18:27:25.398737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:27.398 [2024-11-17 18:27:25.398800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:27.398 [2024-11-17 18:27:25.398818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa10610 with addr=10.0.0.2, port=4420 00:18:27.398 [2024-11-17 18:27:25.398829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa10610 is same with the state(5) to be set 00:18:27.398 [2024-11-17 18:27:25.398849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa10610 (9): Bad file descriptor 00:18:27.398 [2024-11-17 18:27:25.398867] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:27.398 [2024-11-17 18:27:25.398877] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:27.398 [2024-11-17 18:27:25.398887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:27.398 [2024-11-17 18:27:25.398909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
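What follows the abort dump is the reconnect loop of the bdev_nvme reset path: the controller for nqn.2016-06.io.spdk:cnode1 is disconnected, and each reconnect attempt to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) on both the io_uring and POSIX socket providers because the TCP listener has been taken down on the target side, so every pass ends in "Resetting controller failed." and is retried after the configured delay. A minimal way to drive the same behaviour by hand is sketched below; it is not part of the test script, but it uses only RPC invocations that appear verbatim elsewhere in this log (the target-side listener RPCs go to the default RPC socket, the initiator-side query goes through /var/tmp/bdevperf.sock).

#!/usr/bin/env bash
# Sketch only: force the errno 111 reconnect loop seen above, then restore service.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Target side: drop the listener so every reconnect attempt gets connection refused.
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# Initiator (bdevperf) side: the controller stays registered while bdev_nvme retries.
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'

sleep 2   # host/timeout.sh@56 sleeps here as well before re-checking

# Target side: re-add the listener; the next reconnect attempt should then succeed.
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420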
00:18:27.398 [2024-11-17 18:27:25.398936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:27.398 18:27:25 -- host/timeout.sh@56 -- # sleep 2 00:18:29.301 [2024-11-17 18:27:27.399041] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:29.301 [2024-11-17 18:27:27.399159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:29.301 [2024-11-17 18:27:27.399205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:29.301 [2024-11-17 18:27:27.399223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa10610 with addr=10.0.0.2, port=4420 00:18:29.301 [2024-11-17 18:27:27.399236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa10610 is same with the state(5) to be set 00:18:29.301 [2024-11-17 18:27:27.399261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa10610 (9): Bad file descriptor 00:18:29.301 [2024-11-17 18:27:27.399304] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:29.301 [2024-11-17 18:27:27.399317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:29.301 [2024-11-17 18:27:27.399327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:29.301 [2024-11-17 18:27:27.399370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:29.301 [2024-11-17 18:27:27.399398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:29.301 18:27:27 -- host/timeout.sh@57 -- # get_controller 00:18:29.301 18:27:27 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:29.301 18:27:27 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:29.560 18:27:27 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:29.560 18:27:27 -- host/timeout.sh@58 -- # get_bdev 00:18:29.560 18:27:27 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:29.560 18:27:27 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:29.819 18:27:27 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:29.819 18:27:27 -- host/timeout.sh@61 -- # sleep 5 00:18:31.196 [2024-11-17 18:27:29.399506] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.196 [2024-11-17 18:27:29.399628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.196 [2024-11-17 18:27:29.399673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.196 [2024-11-17 18:27:29.399690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa10610 with addr=10.0.0.2, port=4420 00:18:31.196 [2024-11-17 18:27:29.399703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa10610 is same with the state(5) to be set 00:18:31.196 [2024-11-17 18:27:29.399727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa10610 (9): Bad file descriptor 00:18:31.196 [2024-11-17 18:27:29.399746] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:31.196 [2024-11-17 18:27:29.399755] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:31.196 [2024-11-17 18:27:29.399766] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:31.196 [2024-11-17 18:27:29.399793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:31.196 [2024-11-17 18:27:29.399836] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.727 [2024-11-17 18:27:31.399864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:33.727 [2024-11-17 18:27:31.399933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:33.727 [2024-11-17 18:27:31.399946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:33.727 [2024-11-17 18:27:31.399957] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:33.727 [2024-11-17 18:27:31.399985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.295 00:18:34.295 Latency(us) 00:18:34.295 [2024-11-17T18:27:32.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.295 [2024-11-17T18:27:32.562Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:34.295 Verification LBA range: start 0x0 length 0x4000 00:18:34.295 NVMe0n1 : 8.12 1886.25 7.37 15.76 0.00 67201.62 3321.48 7015926.69 00:18:34.295 [2024-11-17T18:27:32.562Z] =================================================================================================================== 00:18:34.295 [2024-11-17T18:27:32.562Z] Total : 1886.25 7.37 15.76 0.00 67201.62 3321.48 7015926.69 00:18:34.295 0 00:18:34.869 18:27:32 -- host/timeout.sh@62 -- # get_controller 00:18:34.869 18:27:32 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:34.869 18:27:32 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:35.162 18:27:33 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:35.162 18:27:33 -- host/timeout.sh@63 -- # get_bdev 00:18:35.162 18:27:33 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:35.162 18:27:33 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:35.421 18:27:33 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:35.421 18:27:33 -- host/timeout.sh@65 -- # wait 85109 00:18:35.421 18:27:33 -- host/timeout.sh@67 -- # killprocess 85084 00:18:35.421 18:27:33 -- common/autotest_common.sh@936 -- # '[' -z 85084 ']' 00:18:35.421 18:27:33 -- common/autotest_common.sh@940 -- # kill -0 85084 00:18:35.421 18:27:33 -- common/autotest_common.sh@941 -- # uname 00:18:35.421 18:27:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.421 18:27:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85084 00:18:35.421 killing process with pid 85084 00:18:35.421 Received shutdown signal, test time was about 9.307903 seconds 00:18:35.421 00:18:35.421 Latency(us) 00:18:35.421 [2024-11-17T18:27:33.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.421 [2024-11-17T18:27:33.688Z] =================================================================================================================== 00:18:35.421 
[2024-11-17T18:27:33.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.421 18:27:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:35.421 18:27:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:35.421 18:27:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85084' 00:18:35.421 18:27:33 -- common/autotest_common.sh@955 -- # kill 85084 00:18:35.421 18:27:33 -- common/autotest_common.sh@960 -- # wait 85084 00:18:35.679 18:27:33 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.679 [2024-11-17 18:27:33.927558] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.938 18:27:33 -- host/timeout.sh@74 -- # bdevperf_pid=85230 00:18:35.938 18:27:33 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:35.938 18:27:33 -- host/timeout.sh@76 -- # waitforlisten 85230 /var/tmp/bdevperf.sock 00:18:35.938 18:27:33 -- common/autotest_common.sh@829 -- # '[' -z 85230 ']' 00:18:35.938 18:27:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.938 18:27:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.938 18:27:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.938 18:27:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.938 18:27:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.938 [2024-11-17 18:27:33.989212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:35.938 [2024-11-17 18:27:33.989305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85230 ] 00:18:35.938 [2024-11-17 18:27:34.123997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.938 [2024-11-17 18:27:34.158353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.873 18:27:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.873 18:27:34 -- common/autotest_common.sh@862 -- # return 0 00:18:36.873 18:27:34 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:37.131 18:27:35 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:37.389 NVMe0n1 00:18:37.389 18:27:35 -- host/timeout.sh@84 -- # rpc_pid=85254 00:18:37.389 18:27:35 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:37.389 18:27:35 -- host/timeout.sh@86 -- # sleep 1 00:18:37.389 Running I/O for 10 seconds... 
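At this point the first bdevperf instance (pid 85084) has been killed and the second half of the timeout test begins: the TCP listener is re-registered, a fresh bdevperf (pid 85230) is started on core mask 0x4 with a 128-deep, 4096-byte verify workload for 10 seconds, NVMe0 is attached with explicit reconnect/timeout knobs, and perform_tests kicks off the run. The sketch below condenses the commands traced above (host/timeout.sh@71 through @86) into one place; the command lines themselves are copied from the trace, while the step comments and the socket-polling loop are additions (the real script uses waitforlisten for the same purpose).

#!/usr/bin/env bash
# Condensed sketch of the bdevperf setup traced above; commands taken from the log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# 1. Target: listen for NVMe/TCP on 10.0.0.2:4420 again.
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# 2. Start bdevperf: core mask 0x4, queue depth 128, 4096-byte verify I/O for 10 s.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

# Wait for the bdevperf RPC socket to appear (the script itself uses waitforlisten).
until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done

# 3. Configure bdev_nvme and attach the controller with the timeouts under test:
#    retry the connection every 1 s, fail pending I/O after 2 s without a connection,
#    and give up on the controller entirely after 5 s.
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn" \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# 4. Kick off the 10-second I/O run.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests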
00:18:38.324 18:27:36 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.585 [2024-11-17 18:27:36.707031] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa301b0 is same with the state(5) to be set 00:18:38.585 [2024-11-17 18:27:36.707099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa301b0 is same with the state(5) to be set 00:18:38.585 [2024-11-17 18:27:36.707127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa301b0 is same with the state(5) to be set 00:18:38.585 [2024-11-17 18:27:36.707135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa301b0 is same with the state(5) to be set 00:18:38.585 [2024-11-17 18:27:36.707142] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa301b0 is same with the state(5) to be set 00:18:38.585 [2024-11-17 18:27:36.707149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa301b0 is same with the state(5) to be set 00:18:38.585 [2024-11-17 18:27:36.707157] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa301b0 is same with the state(5) to be set 00:18:38.585 [2024-11-17 18:27:36.707164] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa301b0 is same with the state(5) to be set 00:18:38.586 [2024-11-17 18:27:36.707172] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa301b0 is same with the state(5) to be set 00:18:38.586 [2024-11-17 18:27:36.707179] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa301b0 is same with the state(5) to be set 00:18:38.586 [2024-11-17 18:27:36.707188] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa301b0 is same with the state(5) to be set 00:18:38.586 [2024-11-17 18:27:36.707250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.586 [2024-11-17 18:27:36.707570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.586 [2024-11-17 18:27:36.707588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 
[2024-11-17 18:27:36.707608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.586 [2024-11-17 18:27:36.707738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.586 [2024-11-17 18:27:36.707776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.586 [2024-11-17 18:27:36.707811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.586 [2024-11-17 18:27:36.707831] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.586 [2024-11-17 18:27:36.707852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.586 [2024-11-17 18:27:36.707872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.586 [2024-11-17 18:27:36.707913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.707984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.707993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.708004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.708013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.708024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.708033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.708044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.586 [2024-11-17 18:27:36.708053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.586 [2024-11-17 18:27:36.708064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708244] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.587 [2024-11-17 18:27:36.708829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.587 [2024-11-17 18:27:36.708873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.587 [2024-11-17 18:27:36.708882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 
[2024-11-17 18:27:36.708893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.588 [2024-11-17 18:27:36.708901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.708911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.588 [2024-11-17 18:27:36.708919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.708930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.708939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.708949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.588 [2024-11-17 18:27:36.708958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.708968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.708977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.708988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.708997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709080] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.588 [2024-11-17 18:27:36.709265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.588 [2024-11-17 18:27:36.709316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709500] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.588 [2024-11-17 18:27:36.709607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.588 [2024-11-17 18:27:36.709617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.589 [2024-11-17 18:27:36.709626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.589 [2024-11-17 18:27:36.709646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.589 [2024-11-17 18:27:36.709664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.589 [2024-11-17 18:27:36.709684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.589 [2024-11-17 18:27:36.709720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.589 [2024-11-17 18:27:36.709740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.589 [2024-11-17 18:27:36.709760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.589 [2024-11-17 18:27:36.709780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.589 [2024-11-17 18:27:36.709800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.589 [2024-11-17 18:27:36.709822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.589 [2024-11-17 18:27:36.709842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.589 [2024-11-17 18:27:36.709862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.589 [2024-11-17 18:27:36.709881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.589 [2024-11-17 18:27:36.709901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125808 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:38.589 [2024-11-17 18:27:36.709921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.589 [2024-11-17 18:27:36.709962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.709974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1530870 is same with the state(5) to be set 00:18:38.589 [2024-11-17 18:27:36.709986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:38.589 [2024-11-17 18:27:36.709994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:38.589 [2024-11-17 18:27:36.710003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125832 len:8 PRP1 0x0 PRP2 0x0 00:18:38.589 [2024-11-17 18:27:36.710012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.589 [2024-11-17 18:27:36.710055] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1530870 was disconnected and freed. reset controller. 00:18:38.589 [2024-11-17 18:27:36.710332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.589 [2024-11-17 18:27:36.710423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535450 (9): Bad file descriptor 00:18:38.589 [2024-11-17 18:27:36.710539] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.589 [2024-11-17 18:27:36.710603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.589 [2024-11-17 18:27:36.710647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.589 [2024-11-17 18:27:36.710663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535450 with addr=10.0.0.2, port=4420 00:18:38.589 [2024-11-17 18:27:36.710674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535450 is same with the state(5) to be set 00:18:38.589 [2024-11-17 18:27:36.710692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535450 (9): Bad file descriptor 00:18:38.589 [2024-11-17 18:27:36.710709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:38.589 [2024-11-17 18:27:36.710717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:38.589 [2024-11-17 18:27:36.710728] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:38.589 [2024-11-17 18:27:36.710748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:38.589 [2024-11-17 18:27:36.710763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:38.589 18:27:36 -- host/timeout.sh@90 -- # sleep 1
00:18:39.524 [2024-11-17 18:27:37.710929] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:39.524 [2024-11-17 18:27:37.711058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:39.524 [2024-11-17 18:27:37.711100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:39.524 [2024-11-17 18:27:37.711116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535450 with addr=10.0.0.2, port=4420
00:18:39.524 [2024-11-17 18:27:37.711128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535450 is same with the state(5) to be set
00:18:39.524 [2024-11-17 18:27:37.711153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535450 (9): Bad file descriptor
00:18:39.524 [2024-11-17 18:27:37.711170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:39.524 [2024-11-17 18:27:37.711179] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:39.524 [2024-11-17 18:27:37.711188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:39.524 [2024-11-17 18:27:37.711213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:39.524 [2024-11-17 18:27:37.711224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:39.524 18:27:37 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:39.781 [2024-11-17 18:27:37.981356] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:39.781 18:27:38 -- host/timeout.sh@92 -- # wait 85254
00:18:40.716 [2024-11-17 18:27:38.724217] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:48.833
00:18:48.833 Latency(us)
00:18:48.833 [2024-11-17T18:27:47.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:48.833 [2024-11-17T18:27:47.100Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:48.833 Verification LBA range: start 0x0 length 0x4000
00:18:48.833 NVMe0n1 : 10.01 9771.17 38.17 0.00 0.00 13079.36 997.93 3019898.88
00:18:48.833 [2024-11-17T18:27:47.100Z] ===================================================================================================================
00:18:48.833 [2024-11-17T18:27:47.100Z] Total : 9771.17 38.17 0.00 0.00 13079.36 997.93 3019898.88
00:18:48.833 0
00:18:48.833 18:27:45 -- host/timeout.sh@97 -- # rpc_pid=85364
00:18:48.833 18:27:45 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:48.833 18:27:45 -- host/timeout.sh@98 -- # sleep 1
00:18:48.833 Running I/O for 10 seconds...
00:18:48.833 18:27:46 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.833 [2024-11-17 18:27:46.882114] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2dd80 is same with the state(5) to be set 00:18:48.833 [2024-11-17 18:27:46.882189] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2dd80 is same with the state(5) to be set 00:18:48.833 [2024-11-17 18:27:46.882200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2dd80 is same with the state(5) to be set 00:18:48.833 [2024-11-17 18:27:46.882209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2dd80 is same with the state(5) to be set 00:18:48.833 [2024-11-17 18:27:46.882216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2dd80 is same with the state(5) to be set 00:18:48.833 [2024-11-17 18:27:46.882224] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2dd80 is same with the state(5) to be set 00:18:48.833 [2024-11-17 18:27:46.882232] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2dd80 is same with the state(5) to be set 00:18:48.833 [2024-11-17 18:27:46.882239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2dd80 is same with the state(5) to be set 00:18:48.833 [2024-11-17 18:27:46.882247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2dd80 is same with the state(5) to be set 00:18:48.833 [2024-11-17 18:27:46.882255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2dd80 is same with the state(5) to be set 00:18:48.833 [2024-11-17 18:27:46.882263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2dd80 is same with the state(5) to be set 00:18:48.833 [2024-11-17 18:27:46.882351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.833 [2024-11-17 18:27:46.882394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.833 [2024-11-17 18:27:46.882417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.833 [2024-11-17 18:27:46.882427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.833 [2024-11-17 18:27:46.882437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.833 [2024-11-17 18:27:46.882446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.833 [2024-11-17 18:27:46.882466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.833 [2024-11-17 18:27:46.882508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.833 [2024-11-17 18:27:46.882519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:48.833 [2024-11-17 18:27:46.882529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.833 [2024-11-17 18:27:46.882540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.833 [2024-11-17 18:27:46.882549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.833 [2024-11-17 18:27:46.882561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.833 [2024-11-17 18:27:46.882570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.833 [2024-11-17 18:27:46.882581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.833 [2024-11-17 18:27:46.882590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.833 [2024-11-17 18:27:46.882601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.833 [2024-11-17 18:27:46.882609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.833 [2024-11-17 18:27:46.882620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.833 [2024-11-17 18:27:46.882629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.833 [2024-11-17 18:27:46.882640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.833 [2024-11-17 18:27:46.882650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.833 [2024-11-17 18:27:46.882661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.882670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.834 [2024-11-17 18:27:46.882690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.834 [2024-11-17 18:27:46.882710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 
18:27:46.882730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.882750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.882773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.882809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.882828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.882848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.834 [2024-11-17 18:27:46.882868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.882888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.834 [2024-11-17 18:27:46.882907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.834 [2024-11-17 18:27:46.882926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.834 [2024-11-17 18:27:46.882946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.834 [2024-11-17 18:27:46.882965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.834 [2024-11-17 18:27:46.882984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.882994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.883003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.883014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.834 [2024-11-17 18:27:46.883022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.883033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.883042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.883053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.883061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.883072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.883081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.883092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.883102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.883113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.883122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.883132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.883141] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.883152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.883161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.883171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.883180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.883190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.883199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.834 [2024-11-17 18:27:46.883210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.834 [2024-11-17 18:27:46.883219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 [2024-11-17 18:27:46.883776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.835 [2024-11-17 18:27:46.883785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.835 
[2024-11-17 18:27:46.883796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.835 [2024-11-17 18:27:46.883806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.883817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.836 [2024-11-17 18:27:46.883826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.883837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.883845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.883856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.883865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.883875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.883884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.883895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.836 [2024-11-17 18:27:46.883903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.883914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.836 [2024-11-17 18:27:46.883923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.883933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.836 [2024-11-17 18:27:46.883942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.883952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.883961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.883972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.883980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.883991] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.836 [2024-11-17 18:27:46.884000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.836 [2024-11-17 18:27:46.884020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.836 [2024-11-17 18:27:46.884058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.836 [2024-11-17 18:27:46.884383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884399] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.836 [2024-11-17 18:27:46.884428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.836 [2024-11-17 18:27:46.884459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.836 [2024-11-17 18:27:46.884468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 
nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.837 [2024-11-17 18:27:46.884724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.837 [2024-11-17 18:27:46.884765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.837 [2024-11-17 18:27:46.884784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126408 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.837 [2024-11-17 18:27:46.884842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.837 [2024-11-17 18:27:46.884881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.884978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.884988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:48.837 [2024-11-17 18:27:46.884997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.885008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.837 [2024-11-17 18:27:46.885019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.885031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e87a0 is same with the state(5) to be set 00:18:48.837 [2024-11-17 18:27:46.885043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:48.837 [2024-11-17 18:27:46.885050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:48.837 [2024-11-17 18:27:46.885058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125832 len:8 PRP1 0x0 PRP2 0x0 00:18:48.837 [2024-11-17 18:27:46.885067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.837 [2024-11-17 18:27:46.885108] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15e87a0 was disconnected and freed. reset controller. 00:18:48.837 [2024-11-17 18:27:46.885380] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:48.837 [2024-11-17 18:27:46.885469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535450 (9): Bad file descriptor 00:18:48.837 [2024-11-17 18:27:46.885581] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:48.837 [2024-11-17 18:27:46.885647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:48.837 [2024-11-17 18:27:46.885690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:48.837 [2024-11-17 18:27:46.885706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535450 with addr=10.0.0.2, port=4420 00:18:48.837 [2024-11-17 18:27:46.885717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535450 is same with the state(5) to be set 00:18:48.837 [2024-11-17 18:27:46.885737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535450 (9): Bad file descriptor 00:18:48.837 [2024-11-17 18:27:46.885753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:48.837 [2024-11-17 18:27:46.885762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:48.837 [2024-11-17 18:27:46.885772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:48.837 [2024-11-17 18:27:46.885793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:48.837 [2024-11-17 18:27:46.885804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:48.837 18:27:46 -- host/timeout.sh@101 -- # sleep 3 00:18:49.775 [2024-11-17 18:27:47.885927] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:49.775 [2024-11-17 18:27:47.886047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:49.775 [2024-11-17 18:27:47.886089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:49.775 [2024-11-17 18:27:47.886105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535450 with addr=10.0.0.2, port=4420 00:18:49.775 [2024-11-17 18:27:47.886117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535450 is same with the state(5) to be set 00:18:49.775 [2024-11-17 18:27:47.886142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535450 (9): Bad file descriptor 00:18:49.775 [2024-11-17 18:27:47.886158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:49.775 [2024-11-17 18:27:47.886167] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:49.775 [2024-11-17 18:27:47.886176] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:49.775 [2024-11-17 18:27:47.886201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:49.775 [2024-11-17 18:27:47.886211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.711 [2024-11-17 18:27:48.886322] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:50.712 [2024-11-17 18:27:48.886431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:50.712 [2024-11-17 18:27:48.886490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:50.712 [2024-11-17 18:27:48.886509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535450 with addr=10.0.0.2, port=4420 00:18:50.712 [2024-11-17 18:27:48.886522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535450 is same with the state(5) to be set 00:18:50.712 [2024-11-17 18:27:48.886548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535450 (9): Bad file descriptor 00:18:50.712 [2024-11-17 18:27:48.886567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:50.712 [2024-11-17 18:27:48.886576] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:50.712 [2024-11-17 18:27:48.886587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:50.712 [2024-11-17 18:27:48.886613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:50.712 [2024-11-17 18:27:48.886625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:51.656 [2024-11-17 18:27:49.888269] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.656 [2024-11-17 18:27:49.888410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.656 [2024-11-17 18:27:49.888455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.656 [2024-11-17 18:27:49.888472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535450 with addr=10.0.0.2, port=4420 00:18:51.656 [2024-11-17 18:27:49.888484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535450 is same with the state(5) to be set 00:18:51.656 [2024-11-17 18:27:49.888627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535450 (9): Bad file descriptor 00:18:51.656 [2024-11-17 18:27:49.888729] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:51.656 [2024-11-17 18:27:49.888744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:51.656 [2024-11-17 18:27:49.888754] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:51.656 [2024-11-17 18:27:49.891138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.656 [2024-11-17 18:27:49.891185] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:51.656 18:27:49 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.916 [2024-11-17 18:27:50.175207] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.175 18:27:50 -- host/timeout.sh@103 -- # wait 85364 00:18:52.742 [2024-11-17 18:27:50.916501] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:58.017
00:18:58.017 Latency(us)
00:18:58.017 [2024-11-17T18:27:56.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:58.017 [2024-11-17T18:27:56.284Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:58.017 Verification LBA range: start 0x0 length 0x4000
00:18:58.017 NVMe0n1 : 10.01 8464.95 33.07 6115.80 0.00 8765.23 562.27 3019898.88
00:18:58.017 [2024-11-17T18:27:56.284Z] ===================================================================================================================
00:18:58.017 [2024-11-17T18:27:56.284Z] Total : 8464.95 33.07 6115.80 0.00 8765.23 0.00 3019898.88
00:18:58.017 0
00:18:58.017 18:27:55 -- host/timeout.sh@105 -- # killprocess 85230
00:18:58.017 18:27:55 -- common/autotest_common.sh@936 -- # '[' -z 85230 ']'
00:18:58.017 18:27:55 -- common/autotest_common.sh@940 -- # kill -0 85230
00:18:58.017 18:27:55 -- common/autotest_common.sh@941 -- # uname
00:18:58.017 18:27:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:58.017 18:27:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85230
00:18:58.017 killing process with pid 85230 Received shutdown signal, test time was about 10.000000 seconds
00:18:58.017
00:18:58.017 Latency(us)
00:18:58.017 [2024-11-17T18:27:56.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:58.017 [2024-11-17T18:27:56.284Z] ===================================================================================================================
00:18:58.017 [2024-11-17T18:27:56.284Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:58.017 18:27:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:58.017 18:27:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:58.017 18:27:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85230'
00:18:58.017 18:27:55 -- common/autotest_common.sh@955 -- # kill 85230
00:18:58.017 18:27:55 -- common/autotest_common.sh@960 -- # wait 85230
00:18:58.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:58.017 18:27:55 -- host/timeout.sh@110 -- # bdevperf_pid=85478
00:18:58.017 18:27:55 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:18:58.017 18:27:55 -- host/timeout.sh@112 -- # waitforlisten 85478 /var/tmp/bdevperf.sock
00:18:58.017 18:27:55 -- common/autotest_common.sh@829 -- # '[' -z 85478 ']'
00:18:58.017 18:27:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:58.017 18:27:55 -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:58.017 18:27:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:58.017 18:27:55 -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:58.017 18:27:55 -- common/autotest_common.sh@10 -- # set +x
00:18:58.017 [2024-11-17 18:27:56.002223] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
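An aside, not part of the console output: the trace above tears down the previous bdevperf (pid 85230) and starts a fresh one in wait mode for the next timeout scenario; the rest of the setup (bdev_nvme_set_options, bdev_nvme_attach_controller with a 5 s controller-loss timeout and 2 s reconnect delay, perform_tests, and the listener removal that triggers the I/O aborts) appears in the trace that continues below. Condensed into one sketch, using only commands and flags that occur in this log; the shell variables, the trailing '&', and the comments are additions, and the bdev_nvme_set_options arguments are copied verbatim without interpreting them.

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf paused (-z): it waits for a perform_tests RPC instead of running
# the workload immediately, so the NVMe controller can be attached first.
$SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &

# NVMe bdev options, exactly as issued by host/timeout.sh@118.
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1 -e 9

# Attach the TCP controller: retry the connection every 2 s and give up (delete
# the controller) if it stays unreachable for 5 s.
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the 10-second randread run, then pull the listener out from under it
# to exercise the timeout/abort path shown below.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests &
sleep 1
$SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420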
00:18:58.017 [2024-11-17 18:27:56.002356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85478 ] 00:18:58.017 [2024-11-17 18:27:56.136091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.017 [2024-11-17 18:27:56.170405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.955 18:27:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.955 18:27:56 -- common/autotest_common.sh@862 -- # return 0 00:18:58.955 18:27:56 -- host/timeout.sh@116 -- # dtrace_pid=85493 00:18:58.955 18:27:56 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 85478 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:58.955 18:27:56 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:59.214 18:27:57 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:59.472 NVMe0n1 00:18:59.472 18:27:57 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.472 18:27:57 -- host/timeout.sh@124 -- # rpc_pid=85536 00:18:59.472 18:27:57 -- host/timeout.sh@125 -- # sleep 1 00:18:59.731 Running I/O for 10 seconds... 00:19:00.673 18:27:58 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.673 [2024-11-17 18:27:58.903660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903751] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903764] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903790] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903824] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903858] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903866] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903891] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903899] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903957] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903984] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.903992] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904017] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904045] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904062] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904070] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904078] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904120] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904153] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904161] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 
00:19:00.673 [2024-11-17 18:27:58.904210] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904219] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904228] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904293] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904302] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904311] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.673 [2024-11-17 18:27:58.904353] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904361] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904386] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904394] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is 
same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904411] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904435] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904443] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904483] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904491] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904499] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904524] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904583] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904591] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904599] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904625] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904633] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904730] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904763] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904816] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdec20 is same with the state(5) to be set 00:19:00.674 [2024-11-17 18:27:58.904897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.904930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.904952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.904963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.904975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.904985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.904996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.905007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.905018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.905027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.905039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.905048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.905060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.905069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.905080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.905090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.905101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.905110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.905122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.905131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.905143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.905152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.905163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.905173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.674 [2024-11-17 18:27:58.905184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.674 [2024-11-17 18:27:58.905193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 
18:27:58.905306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:89 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14792 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.905980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.905990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.906001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.906011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.906022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.675 [2024-11-17 18:27:58.906031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.675 [2024-11-17 18:27:58.906043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:00.676 [2024-11-17 18:27:58.906155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906375] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.676 [2024-11-17 18:27:58.906883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.676 [2024-11-17 18:27:58.906894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.906903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.906915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.906924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.906935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.906945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.906956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.906965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.906978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.906988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.906999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 
[2024-11-17 18:27:58.907274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.677 [2024-11-17 18:27:58.907585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.677 [2024-11-17 18:27:58.907596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.678 [2024-11-17 18:27:58.907605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.678 [2024-11-17 18:27:58.907616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.678 [2024-11-17 18:27:58.907626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.678 [2024-11-17 18:27:58.907637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.678 [2024-11-17 18:27:58.907646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.678 [2024-11-17 18:27:58.907658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.678 [2024-11-17 18:27:58.907667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.678 [2024-11-17 18:27:58.907680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bc9f0 is same with the state(5) to be set 00:19:00.678 [2024-11-17 18:27:58.907693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:00.678 [2024-11-17 18:27:58.907702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:00.678 [2024-11-17 
18:27:58.907712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83792 len:8 PRP1 0x0 PRP2 0x0 00:19:00.678 [2024-11-17 18:27:58.907721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.678 [2024-11-17 18:27:58.907766] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5bc9f0 was disconnected and freed. reset controller. 00:19:00.678 [2024-11-17 18:27:58.908073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:00.678 [2024-11-17 18:27:58.908181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c1470 (9): Bad file descriptor 00:19:00.678 [2024-11-17 18:27:58.908323] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.678 [2024-11-17 18:27:58.908404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.678 [2024-11-17 18:27:58.908451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.678 [2024-11-17 18:27:58.908469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1470 with addr=10.0.0.2, port=4420 00:19:00.678 [2024-11-17 18:27:58.908480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c1470 is same with the state(5) to be set 00:19:00.678 [2024-11-17 18:27:58.908501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c1470 (9): Bad file descriptor 00:19:00.678 [2024-11-17 18:27:58.908520] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:00.678 [2024-11-17 18:27:58.908530] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:00.678 [2024-11-17 18:27:58.908541] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:00.678 [2024-11-17 18:27:58.908562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
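Each READ / ABORTED - SQ DELETION pair above is one queued command being completed manually while the TCP I/O qpair is torn down: the cid values count down as every outstanding read on submission queue 1 is drained, after which qpair 0x5bc9f0 is freed and the controller reset begins. If this console output is saved to a local file (build.log below is only a placeholder name), the aborted I/O can be tallied with standard tools; this is a log-reading aid, not part of the test itself.

    # Count every aborted completion, regardless of how the log lines are wrapped.
    grep -o 'ABORTED - SQ DELETION' build.log | wc -l
    # Break the aborted reads down per submission queue id.
    grep -o 'READ sqid:[0-9]*' build.log | sort | uniq -c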
00:19:00.678 [2024-11-17 18:27:58.908574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:00.678 18:27:58 -- host/timeout.sh@128 -- # wait 85536 00:19:03.213 [2024-11-17 18:28:00.908749] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:03.213 [2024-11-17 18:28:00.908867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:03.213 [2024-11-17 18:28:00.908914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:03.213 [2024-11-17 18:28:00.908932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1470 with addr=10.0.0.2, port=4420 00:19:03.213 [2024-11-17 18:28:00.908944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c1470 is same with the state(5) to be set 00:19:03.213 [2024-11-17 18:28:00.908971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c1470 (9): Bad file descriptor 00:19:03.213 [2024-11-17 18:28:00.908990] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:03.213 [2024-11-17 18:28:00.908999] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:03.213 [2024-11-17 18:28:00.909010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:03.213 [2024-11-17 18:28:00.909037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:03.213 [2024-11-17 18:28:00.909049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:05.155 [2024-11-17 18:28:02.909243] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:05.155 [2024-11-17 18:28:02.909405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:05.155 [2024-11-17 18:28:02.909455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:05.155 [2024-11-17 18:28:02.909473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1470 with addr=10.0.0.2, port=4420 00:19:05.155 [2024-11-17 18:28:02.909487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c1470 is same with the state(5) to be set 00:19:05.155 [2024-11-17 18:28:02.909513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c1470 (9): Bad file descriptor 00:19:05.155 [2024-11-17 18:28:02.909543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:05.155 [2024-11-17 18:28:02.909555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:05.155 [2024-11-17 18:28:02.909566] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:05.155 [2024-11-17 18:28:02.909596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:05.155 [2024-11-17 18:28:02.909609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:07.062 [2024-11-17 18:28:04.909689] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
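The reconnect attempts above are paced roughly two seconds apart (18:27:58.908, 18:28:00.908, 18:28:02.909, 18:28:04.909), and each fails with connect() errno 111 until the controller is left in a failed state; that pacing is what the probes in trace.txt, echoed a little further down, record as reconnect delays. A rough way to read the spacing back out of the raw trace.txt the test writes (before the console prefixes are added, and before the test removes the file) is sketched below; the awk helper is illustrative only and not part of the suite.

    # Print the gap between consecutive 'reconnect bdev controller NVMe0' probes
    # in trace.txt, whose lines look like '<stamp>: <event>'.
    awk -F': ' '/reconnect bdev controller NVMe0/ {
        if (prev != "") printf "gap since previous reconnect: %.3f\n", $1 - prev
        prev = $1
    }' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt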
00:19:07.062 [2024-11-17 18:28:04.909781] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:07.062 [2024-11-17 18:28:04.909793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:07.062 [2024-11-17 18:28:04.909804] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:07.062 [2024-11-17 18:28:04.909832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:08.000 00:19:08.000 Latency(us) 00:19:08.000 [2024-11-17T18:28:06.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.000 [2024-11-17T18:28:06.267Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:08.000 NVMe0n1 : 8.13 2141.18 8.36 15.75 0.00 59244.30 7357.91 7046430.72 00:19:08.000 [2024-11-17T18:28:06.267Z] =================================================================================================================== 00:19:08.000 [2024-11-17T18:28:06.267Z] Total : 2141.18 8.36 15.75 0.00 59244.30 7357.91 7046430.72 00:19:08.000 0 00:19:08.000 18:28:05 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:08.000 Attaching 5 probes... 00:19:08.000 1412.432717: reset bdev controller NVMe0 00:19:08.000 1412.624421: reconnect bdev controller NVMe0 00:19:08.000 3412.984313: reconnect delay bdev controller NVMe0 00:19:08.000 3413.022011: reconnect bdev controller NVMe0 00:19:08.000 5413.494743: reconnect delay bdev controller NVMe0 00:19:08.000 5413.533622: reconnect bdev controller NVMe0 00:19:08.000 7414.037387: reconnect delay bdev controller NVMe0 00:19:08.000 7414.056284: reconnect bdev controller NVMe0 00:19:08.000 18:28:05 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:08.000 18:28:05 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:08.000 18:28:05 -- host/timeout.sh@136 -- # kill 85493 00:19:08.000 18:28:05 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:08.000 18:28:05 -- host/timeout.sh@139 -- # killprocess 85478 00:19:08.000 18:28:05 -- common/autotest_common.sh@936 -- # '[' -z 85478 ']' 00:19:08.000 18:28:05 -- common/autotest_common.sh@940 -- # kill -0 85478 00:19:08.000 18:28:05 -- common/autotest_common.sh@941 -- # uname 00:19:08.000 18:28:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:08.000 18:28:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85478 00:19:08.000 killing process with pid 85478 00:19:08.000 Received shutdown signal, test time was about 8.193214 seconds 00:19:08.000 00:19:08.000 Latency(us) 00:19:08.000 [2024-11-17T18:28:06.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.000 [2024-11-17T18:28:06.267Z] =================================================================================================================== 00:19:08.000 [2024-11-17T18:28:06.267Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:08.000 18:28:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:08.000 18:28:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:08.000 18:28:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85478' 00:19:08.000 18:28:05 -- common/autotest_common.sh@955 -- # kill 85478 00:19:08.000 18:28:05 -- common/autotest_common.sh@960 -- # wait 85478 00:19:08.000 18:28:06 
-- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:08.260 18:28:06 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:08.260 18:28:06 -- host/timeout.sh@145 -- # nvmftestfini 00:19:08.260 18:28:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:08.260 18:28:06 -- nvmf/common.sh@116 -- # sync 00:19:08.260 18:28:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:08.260 18:28:06 -- nvmf/common.sh@119 -- # set +e 00:19:08.260 18:28:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:08.260 18:28:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:08.260 rmmod nvme_tcp 00:19:08.260 rmmod nvme_fabrics 00:19:08.260 rmmod nvme_keyring 00:19:08.260 18:28:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:08.260 18:28:06 -- nvmf/common.sh@123 -- # set -e 00:19:08.260 18:28:06 -- nvmf/common.sh@124 -- # return 0 00:19:08.260 18:28:06 -- nvmf/common.sh@477 -- # '[' -n 85048 ']' 00:19:08.260 18:28:06 -- nvmf/common.sh@478 -- # killprocess 85048 00:19:08.260 18:28:06 -- common/autotest_common.sh@936 -- # '[' -z 85048 ']' 00:19:08.260 18:28:06 -- common/autotest_common.sh@940 -- # kill -0 85048 00:19:08.260 18:28:06 -- common/autotest_common.sh@941 -- # uname 00:19:08.260 18:28:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:08.260 18:28:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85048 00:19:08.260 killing process with pid 85048 00:19:08.260 18:28:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:08.260 18:28:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:08.260 18:28:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85048' 00:19:08.260 18:28:06 -- common/autotest_common.sh@955 -- # kill 85048 00:19:08.260 18:28:06 -- common/autotest_common.sh@960 -- # wait 85048 00:19:08.520 18:28:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:08.520 18:28:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:08.520 18:28:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:08.520 18:28:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.520 18:28:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:08.520 18:28:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.520 18:28:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.520 18:28:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.520 18:28:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:08.520 ************************************ 00:19:08.520 END TEST nvmf_timeout 00:19:08.520 ************************************ 00:19:08.521 00:19:08.521 real 0m46.290s 00:19:08.521 user 2m16.748s 00:19:08.521 sys 0m5.353s 00:19:08.521 18:28:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:08.521 18:28:06 -- common/autotest_common.sh@10 -- # set +x 00:19:08.521 18:28:06 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:19:08.521 18:28:06 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:19:08.521 18:28:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.521 18:28:06 -- common/autotest_common.sh@10 -- # set +x 00:19:08.521 18:28:06 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:08.521 00:19:08.521 real 10m17.509s 00:19:08.521 user 28m54.167s 00:19:08.521 sys 3m20.879s 00:19:08.521 ************************************ 00:19:08.521 END TEST nvmf_tcp 00:19:08.521 ************************************ 00:19:08.521 
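As a quick sanity check on the NVMe0n1 summary table printed above: 2141.18 random reads per second at 4096 bytes each comes to about 8.36 MiB/s, matching the MiB/s column, and 15.75 failures per second over the 8.13 s runtime works out to roughly 128 failed I/Os, consistent with the 128-deep queue shown in the job description. The two bc lines below only restate that arithmetic.

    # Cross-check the summary table: IOPS -> MiB/s, and Fail/s x runtime.
    echo '2141.18 * 4096 / 1048576' | bc -l   # ~8.36 MiB/s
    echo '15.75 * 8.13' | bc -l               # ~128 failed I/Os in total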
18:28:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:08.521 18:28:06 -- common/autotest_common.sh@10 -- # set +x 00:19:08.780 18:28:06 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:19:08.780 18:28:06 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:08.780 18:28:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:08.780 18:28:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:08.780 18:28:06 -- common/autotest_common.sh@10 -- # set +x 00:19:08.780 ************************************ 00:19:08.780 START TEST nvmf_dif 00:19:08.780 ************************************ 00:19:08.780 18:28:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:08.780 * Looking for test storage... 00:19:08.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:08.780 18:28:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:08.780 18:28:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:08.780 18:28:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:08.780 18:28:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:08.780 18:28:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:08.780 18:28:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:08.780 18:28:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:08.780 18:28:06 -- scripts/common.sh@335 -- # IFS=.-: 00:19:08.780 18:28:06 -- scripts/common.sh@335 -- # read -ra ver1 00:19:08.780 18:28:06 -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.780 18:28:06 -- scripts/common.sh@336 -- # read -ra ver2 00:19:08.780 18:28:06 -- scripts/common.sh@337 -- # local 'op=<' 00:19:08.780 18:28:06 -- scripts/common.sh@339 -- # ver1_l=2 00:19:08.780 18:28:06 -- scripts/common.sh@340 -- # ver2_l=1 00:19:08.780 18:28:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:08.780 18:28:06 -- scripts/common.sh@343 -- # case "$op" in 00:19:08.780 18:28:06 -- scripts/common.sh@344 -- # : 1 00:19:08.780 18:28:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:08.780 18:28:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:08.780 18:28:06 -- scripts/common.sh@364 -- # decimal 1 00:19:08.781 18:28:06 -- scripts/common.sh@352 -- # local d=1 00:19:08.781 18:28:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.781 18:28:06 -- scripts/common.sh@354 -- # echo 1 00:19:08.781 18:28:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:08.781 18:28:07 -- scripts/common.sh@365 -- # decimal 2 00:19:08.781 18:28:07 -- scripts/common.sh@352 -- # local d=2 00:19:08.781 18:28:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.781 18:28:07 -- scripts/common.sh@354 -- # echo 2 00:19:08.781 18:28:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:08.781 18:28:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:08.781 18:28:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:08.781 18:28:07 -- scripts/common.sh@367 -- # return 0 00:19:08.781 18:28:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.781 18:28:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:08.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.781 --rc genhtml_branch_coverage=1 00:19:08.781 --rc genhtml_function_coverage=1 00:19:08.781 --rc genhtml_legend=1 00:19:08.781 --rc geninfo_all_blocks=1 00:19:08.781 --rc geninfo_unexecuted_blocks=1 00:19:08.781 00:19:08.781 ' 00:19:08.781 18:28:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:08.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.781 --rc genhtml_branch_coverage=1 00:19:08.781 --rc genhtml_function_coverage=1 00:19:08.781 --rc genhtml_legend=1 00:19:08.781 --rc geninfo_all_blocks=1 00:19:08.781 --rc geninfo_unexecuted_blocks=1 00:19:08.781 00:19:08.781 ' 00:19:08.781 18:28:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:08.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.781 --rc genhtml_branch_coverage=1 00:19:08.781 --rc genhtml_function_coverage=1 00:19:08.781 --rc genhtml_legend=1 00:19:08.781 --rc geninfo_all_blocks=1 00:19:08.781 --rc geninfo_unexecuted_blocks=1 00:19:08.781 00:19:08.781 ' 00:19:08.781 18:28:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:08.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.781 --rc genhtml_branch_coverage=1 00:19:08.781 --rc genhtml_function_coverage=1 00:19:08.781 --rc genhtml_legend=1 00:19:08.781 --rc geninfo_all_blocks=1 00:19:08.781 --rc geninfo_unexecuted_blocks=1 00:19:08.781 00:19:08.781 ' 00:19:08.781 18:28:07 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:08.781 18:28:07 -- nvmf/common.sh@7 -- # uname -s 00:19:08.781 18:28:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.781 18:28:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.781 18:28:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.781 18:28:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.781 18:28:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.781 18:28:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.781 18:28:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.781 18:28:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.781 18:28:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.781 18:28:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.781 18:28:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:19:08.781 
18:28:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:19:08.781 18:28:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.781 18:28:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.781 18:28:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:08.781 18:28:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:08.781 18:28:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.781 18:28:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.781 18:28:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.781 18:28:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.781 18:28:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.781 18:28:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.781 18:28:07 -- paths/export.sh@5 -- # export PATH 00:19:08.781 18:28:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.781 18:28:07 -- nvmf/common.sh@46 -- # : 0 00:19:08.781 18:28:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:08.781 18:28:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:08.781 18:28:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:08.781 18:28:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.781 18:28:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.781 18:28:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:08.781 18:28:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:08.781 18:28:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:08.781 18:28:07 -- target/dif.sh@15 -- # NULL_META=16 00:19:08.781 18:28:07 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:08.781 18:28:07 -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:08.781 18:28:07 -- target/dif.sh@15 -- # NULL_DIF=1 00:19:08.781 18:28:07 -- target/dif.sh@135 -- # nvmftestinit 00:19:08.781 18:28:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:08.781 18:28:07 
-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.781 18:28:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:08.781 18:28:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:08.781 18:28:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:08.781 18:28:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.781 18:28:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:08.781 18:28:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.040 18:28:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:09.040 18:28:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:09.040 18:28:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:09.040 18:28:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:09.040 18:28:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:09.040 18:28:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:09.040 18:28:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.040 18:28:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.040 18:28:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:09.040 18:28:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:09.040 18:28:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:09.040 18:28:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:09.040 18:28:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:09.040 18:28:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.040 18:28:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:09.040 18:28:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:09.040 18:28:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:09.040 18:28:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:09.040 18:28:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:09.040 18:28:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:09.040 Cannot find device "nvmf_tgt_br" 00:19:09.040 18:28:07 -- nvmf/common.sh@154 -- # true 00:19:09.040 18:28:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.040 Cannot find device "nvmf_tgt_br2" 00:19:09.040 18:28:07 -- nvmf/common.sh@155 -- # true 00:19:09.040 18:28:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:09.040 18:28:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:09.040 Cannot find device "nvmf_tgt_br" 00:19:09.040 18:28:07 -- nvmf/common.sh@157 -- # true 00:19:09.040 18:28:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:09.040 Cannot find device "nvmf_tgt_br2" 00:19:09.040 18:28:07 -- nvmf/common.sh@158 -- # true 00:19:09.040 18:28:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:09.040 18:28:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:09.040 18:28:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:09.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.040 18:28:07 -- nvmf/common.sh@161 -- # true 00:19:09.040 18:28:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:09.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.040 18:28:07 -- nvmf/common.sh@162 -- # true 00:19:09.040 18:28:07 -- nvmf/common.sh@165 -- # ip netns add 
nvmf_tgt_ns_spdk 00:19:09.040 18:28:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:09.040 18:28:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:09.040 18:28:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:09.040 18:28:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:09.040 18:28:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:09.040 18:28:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:09.040 18:28:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:09.040 18:28:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:09.040 18:28:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:09.299 18:28:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:09.299 18:28:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:09.299 18:28:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:09.300 18:28:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:09.300 18:28:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:09.300 18:28:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:09.300 18:28:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:09.300 18:28:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:09.300 18:28:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:09.300 18:28:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:09.300 18:28:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:09.300 18:28:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:09.300 18:28:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:09.300 18:28:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:09.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:19:09.300 00:19:09.300 --- 10.0.0.2 ping statistics --- 00:19:09.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.300 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:19:09.300 18:28:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:09.300 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:09.300 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:19:09.300 00:19:09.300 --- 10.0.0.3 ping statistics --- 00:19:09.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.300 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:09.300 18:28:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:09.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:09.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:09.300 00:19:09.300 --- 10.0.0.1 ping statistics --- 00:19:09.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.300 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:09.300 18:28:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.300 18:28:07 -- nvmf/common.sh@421 -- # return 0 00:19:09.300 18:28:07 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:09.300 18:28:07 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:09.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:09.559 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:09.559 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:09.559 18:28:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.559 18:28:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:09.559 18:28:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:09.559 18:28:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.559 18:28:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:09.559 18:28:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:09.817 18:28:07 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:09.817 18:28:07 -- target/dif.sh@137 -- # nvmfappstart 00:19:09.817 18:28:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:09.817 18:28:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:09.817 18:28:07 -- common/autotest_common.sh@10 -- # set +x 00:19:09.817 18:28:07 -- nvmf/common.sh@469 -- # nvmfpid=85983 00:19:09.817 18:28:07 -- nvmf/common.sh@470 -- # waitforlisten 85983 00:19:09.817 18:28:07 -- common/autotest_common.sh@829 -- # '[' -z 85983 ']' 00:19:09.817 18:28:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:09.817 18:28:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.817 18:28:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.817 18:28:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.817 18:28:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.817 18:28:07 -- common/autotest_common.sh@10 -- # set +x 00:19:09.817 [2024-11-17 18:28:07.883595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:09.817 [2024-11-17 18:28:07.883699] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.817 [2024-11-17 18:28:08.025522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.817 [2024-11-17 18:28:08.068528] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:09.817 [2024-11-17 18:28:08.068701] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.817 [2024-11-17 18:28:08.068717] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
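The network plumbing traced above (nvmf_veth_init in nvmf/common.sh) amounts to the standalone sketch below, assembled only from the commands visible in this log; namespace and interface names are the harness's own, and the cleanup and error handling the real script performs are omitted. The earlier "Cannot find device" / "Cannot open network namespace" messages come from the teardown pass that runs before this setup and finds nothing to delete, so they are expected.

  # Target side lives in its own network namespace; the initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # One bridge ties the root-namespace ends together so 10.0.0.1/2/3 share an L2 segment.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # initiator -> target, as checked above
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator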
00:19:09.817 [2024-11-17 18:28:08.068729] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.817 [2024-11-17 18:28:08.068787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.808 18:28:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.808 18:28:08 -- common/autotest_common.sh@862 -- # return 0 00:19:10.808 18:28:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:10.808 18:28:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:10.808 18:28:08 -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 18:28:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.808 18:28:08 -- target/dif.sh@139 -- # create_transport 00:19:10.808 18:28:08 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:10.808 18:28:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.808 18:28:08 -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 [2024-11-17 18:28:08.953434] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.808 18:28:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.808 18:28:08 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:10.808 18:28:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:10.808 18:28:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.808 18:28:08 -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 ************************************ 00:19:10.808 START TEST fio_dif_1_default 00:19:10.808 ************************************ 00:19:10.808 18:28:08 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:19:10.808 18:28:08 -- target/dif.sh@86 -- # create_subsystems 0 00:19:10.808 18:28:08 -- target/dif.sh@28 -- # local sub 00:19:10.808 18:28:08 -- target/dif.sh@30 -- # for sub in "$@" 00:19:10.808 18:28:08 -- target/dif.sh@31 -- # create_subsystem 0 00:19:10.808 18:28:08 -- target/dif.sh@18 -- # local sub_id=0 00:19:10.808 18:28:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:10.808 18:28:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.808 18:28:08 -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 bdev_null0 00:19:10.808 18:28:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.808 18:28:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:10.808 18:28:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.808 18:28:08 -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 18:28:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.808 18:28:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:10.808 18:28:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.808 18:28:08 -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 18:28:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.808 18:28:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:10.808 18:28:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.808 18:28:08 -- common/autotest_common.sh@10 -- # set +x 00:19:10.808 [2024-11-17 18:28:08.997501] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.808 18:28:09 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.808 18:28:09 -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:10.808 18:28:09 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:10.808 18:28:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:10.808 18:28:09 -- nvmf/common.sh@520 -- # config=() 00:19:10.808 18:28:09 -- nvmf/common.sh@520 -- # local subsystem config 00:19:10.808 18:28:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:10.808 18:28:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:10.808 18:28:09 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:10.808 18:28:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:10.808 { 00:19:10.808 "params": { 00:19:10.808 "name": "Nvme$subsystem", 00:19:10.808 "trtype": "$TEST_TRANSPORT", 00:19:10.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.808 "adrfam": "ipv4", 00:19:10.808 "trsvcid": "$NVMF_PORT", 00:19:10.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.808 "hdgst": ${hdgst:-false}, 00:19:10.808 "ddgst": ${ddgst:-false} 00:19:10.808 }, 00:19:10.808 "method": "bdev_nvme_attach_controller" 00:19:10.808 } 00:19:10.808 EOF 00:19:10.808 )") 00:19:10.808 18:28:09 -- target/dif.sh@82 -- # gen_fio_conf 00:19:10.808 18:28:09 -- target/dif.sh@54 -- # local file 00:19:10.808 18:28:09 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:10.808 18:28:09 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:10.808 18:28:09 -- target/dif.sh@56 -- # cat 00:19:10.808 18:28:09 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:10.808 18:28:09 -- nvmf/common.sh@542 -- # cat 00:19:10.808 18:28:09 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.808 18:28:09 -- common/autotest_common.sh@1330 -- # shift 00:19:10.808 18:28:09 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:10.808 18:28:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.808 18:28:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.808 18:28:09 -- nvmf/common.sh@544 -- # jq . 
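The target-side provisioning traced above reduces to a handful of RPCs. A minimal sketch, assuming the harness's rpc_cmd is equivalent to scripts/rpc.py talking to the nvmf_tgt started earlier on the default /var/tmp/spdk.sock; every value below is taken from this log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport with DIF insert/strip enabled (target/dif.sh@136 above appends --dif-insert-or-strip)
  $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, protection information type 1
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # Export it over NVMe/TCP on the target-namespace address used throughout this run
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The later multi-subsystem and rand_params passes repeat the same pattern with additional null bdevs (bdev_null1, bdev_null2) and different --dif-type values, as traced further down.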
00:19:10.808 18:28:09 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:10.808 18:28:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:10.808 18:28:09 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:10.808 18:28:09 -- target/dif.sh@72 -- # (( file <= files )) 00:19:10.808 18:28:09 -- nvmf/common.sh@545 -- # IFS=, 00:19:10.808 18:28:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:10.808 "params": { 00:19:10.808 "name": "Nvme0", 00:19:10.808 "trtype": "tcp", 00:19:10.808 "traddr": "10.0.0.2", 00:19:10.808 "adrfam": "ipv4", 00:19:10.808 "trsvcid": "4420", 00:19:10.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:10.808 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:10.808 "hdgst": false, 00:19:10.808 "ddgst": false 00:19:10.808 }, 00:19:10.808 "method": "bdev_nvme_attach_controller" 00:19:10.808 }' 00:19:10.808 18:28:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:10.808 18:28:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:10.808 18:28:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.808 18:28:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.808 18:28:09 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:10.808 18:28:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:10.808 18:28:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:10.808 18:28:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:10.808 18:28:09 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:10.808 18:28:09 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:11.068 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:11.068 fio-3.35 00:19:11.068 Starting 1 thread 00:19:11.327 [2024-11-17 18:28:09.517892] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
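On the initiator side no kernel nvme driver is involved: fio is launched with SPDK's bdev fio plugin preloaded, the JSON printed just above (a bdev_nvme_attach_controller for nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420) arrives on /dev/fd/62 as the bdev configuration, and the generated job file arrives on /dev/fd/61. A rough standalone equivalent with ordinary files in place of the fd redirections (the file names here are illustrative):

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /tmp/bdev_nvme_attach.json /tmp/dif_1_default.fio

The "RPC Unix domain socket path /var/tmp/spdk.sock in use" / "Unable to start RPC service" errors that follow appear to come from the fio plugin's embedded SPDK application trying to open its own RPC listener while the target already holds the default socket; the run continues normally.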
00:19:11.327 [2024-11-17 18:28:09.518763] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:23.540 00:19:23.540 filename0: (groupid=0, jobs=1): err= 0: pid=86044: Sun Nov 17 18:28:19 2024 00:19:23.540 read: IOPS=9375, BW=36.6MiB/s (38.4MB/s)(366MiB/10001msec) 00:19:23.540 slat (nsec): min=5783, max=77725, avg=8176.29, stdev=3673.32 00:19:23.540 clat (usec): min=316, max=3648, avg=402.52, stdev=49.64 00:19:23.540 lat (usec): min=322, max=3676, avg=410.70, stdev=50.46 00:19:23.540 clat percentiles (usec): 00:19:23.540 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:19:23.540 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 404], 00:19:23.540 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 461], 95.00th=[ 482], 00:19:23.540 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 594], 00:19:23.540 | 99.99th=[ 938] 00:19:23.540 bw ( KiB/s): min=36480, max=39488, per=100.00%, avg=37510.21, stdev=830.45, samples=19 00:19:23.540 iops : min= 9120, max= 9872, avg=9377.53, stdev=207.65, samples=19 00:19:23.540 lat (usec) : 500=97.51%, 750=2.46%, 1000=0.02% 00:19:23.540 lat (msec) : 4=0.01% 00:19:23.540 cpu : usr=85.03%, sys=13.20%, ctx=17, majf=0, minf=8 00:19:23.540 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.540 issued rwts: total=93760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.540 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:23.540 00:19:23.540 Run status group 0 (all jobs): 00:19:23.540 READ: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=366MiB (384MB), run=10001-10001msec 00:19:23.540 18:28:19 -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:23.540 18:28:19 -- target/dif.sh@43 -- # local sub 00:19:23.540 18:28:19 -- target/dif.sh@45 -- # for sub in "$@" 00:19:23.540 18:28:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:23.540 18:28:19 -- target/dif.sh@36 -- # local sub_id=0 00:19:23.540 18:28:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:23.540 18:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.540 18:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:23.540 18:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.540 18:28:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:23.541 18:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.541 18:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:23.541 ************************************ 00:19:23.541 END TEST fio_dif_1_default 00:19:23.541 ************************************ 00:19:23.541 18:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.541 00:19:23.541 real 0m10.834s 00:19:23.541 user 0m9.054s 00:19:23.541 sys 0m1.524s 00:19:23.541 18:28:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:23.541 18:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:23.541 18:28:19 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:23.541 18:28:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:23.541 18:28:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:23.541 18:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:23.541 ************************************ 00:19:23.541 START TEST 
fio_dif_1_multi_subsystems 00:19:23.541 ************************************ 00:19:23.541 18:28:19 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:19:23.541 18:28:19 -- target/dif.sh@92 -- # local files=1 00:19:23.541 18:28:19 -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:23.541 18:28:19 -- target/dif.sh@28 -- # local sub 00:19:23.541 18:28:19 -- target/dif.sh@30 -- # for sub in "$@" 00:19:23.541 18:28:19 -- target/dif.sh@31 -- # create_subsystem 0 00:19:23.541 18:28:19 -- target/dif.sh@18 -- # local sub_id=0 00:19:23.541 18:28:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:23.541 18:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.541 18:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:23.541 bdev_null0 00:19:23.541 18:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.541 18:28:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:23.541 18:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.541 18:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:23.541 18:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.541 18:28:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:23.541 18:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.541 18:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:23.541 18:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.541 18:28:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:23.541 18:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.541 18:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:23.541 [2024-11-17 18:28:19.885776] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.541 18:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.541 18:28:19 -- target/dif.sh@30 -- # for sub in "$@" 00:19:23.541 18:28:19 -- target/dif.sh@31 -- # create_subsystem 1 00:19:23.541 18:28:19 -- target/dif.sh@18 -- # local sub_id=1 00:19:23.541 18:28:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:23.541 18:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.541 18:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:23.541 bdev_null1 00:19:23.541 18:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.541 18:28:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:23.541 18:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.541 18:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:23.541 18:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.541 18:28:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:23.541 18:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.541 18:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:23.541 18:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.541 18:28:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.541 18:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.541 18:28:19 -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.541 18:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.541 18:28:19 -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:23.541 18:28:19 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:23.541 18:28:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:23.541 18:28:19 -- nvmf/common.sh@520 -- # config=() 00:19:23.541 18:28:19 -- nvmf/common.sh@520 -- # local subsystem config 00:19:23.541 18:28:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.541 18:28:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:23.541 18:28:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:23.541 { 00:19:23.541 "params": { 00:19:23.541 "name": "Nvme$subsystem", 00:19:23.541 "trtype": "$TEST_TRANSPORT", 00:19:23.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.541 "adrfam": "ipv4", 00:19:23.541 "trsvcid": "$NVMF_PORT", 00:19:23.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.541 "hdgst": ${hdgst:-false}, 00:19:23.541 "ddgst": ${ddgst:-false} 00:19:23.541 }, 00:19:23.541 "method": "bdev_nvme_attach_controller" 00:19:23.541 } 00:19:23.541 EOF 00:19:23.541 )") 00:19:23.541 18:28:19 -- target/dif.sh@82 -- # gen_fio_conf 00:19:23.541 18:28:19 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.541 18:28:19 -- target/dif.sh@54 -- # local file 00:19:23.541 18:28:19 -- target/dif.sh@56 -- # cat 00:19:23.541 18:28:19 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:23.541 18:28:19 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:23.541 18:28:19 -- nvmf/common.sh@542 -- # cat 00:19:23.541 18:28:19 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:23.541 18:28:19 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.541 18:28:19 -- common/autotest_common.sh@1330 -- # shift 00:19:23.541 18:28:19 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:23.541 18:28:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.541 18:28:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:23.541 18:28:19 -- target/dif.sh@72 -- # (( file <= files )) 00:19:23.541 18:28:19 -- target/dif.sh@73 -- # cat 00:19:23.541 18:28:19 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:23.541 18:28:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.541 18:28:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:23.541 18:28:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:23.541 18:28:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:23.541 { 00:19:23.541 "params": { 00:19:23.541 "name": "Nvme$subsystem", 00:19:23.541 "trtype": "$TEST_TRANSPORT", 00:19:23.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.541 "adrfam": "ipv4", 00:19:23.541 "trsvcid": "$NVMF_PORT", 00:19:23.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.541 "hdgst": ${hdgst:-false}, 00:19:23.541 "ddgst": ${ddgst:-false} 00:19:23.541 }, 00:19:23.541 "method": "bdev_nvme_attach_controller" 00:19:23.541 } 00:19:23.541 EOF 00:19:23.541 )") 00:19:23.541 18:28:19 -- nvmf/common.sh@542 -- # cat 00:19:23.541 18:28:19 -- target/dif.sh@72 
-- # (( file++ )) 00:19:23.541 18:28:19 -- target/dif.sh@72 -- # (( file <= files )) 00:19:23.541 18:28:19 -- nvmf/common.sh@544 -- # jq . 00:19:23.541 18:28:19 -- nvmf/common.sh@545 -- # IFS=, 00:19:23.541 18:28:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:23.541 "params": { 00:19:23.541 "name": "Nvme0", 00:19:23.541 "trtype": "tcp", 00:19:23.541 "traddr": "10.0.0.2", 00:19:23.541 "adrfam": "ipv4", 00:19:23.541 "trsvcid": "4420", 00:19:23.541 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:23.541 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:23.541 "hdgst": false, 00:19:23.541 "ddgst": false 00:19:23.541 }, 00:19:23.541 "method": "bdev_nvme_attach_controller" 00:19:23.541 },{ 00:19:23.541 "params": { 00:19:23.541 "name": "Nvme1", 00:19:23.541 "trtype": "tcp", 00:19:23.541 "traddr": "10.0.0.2", 00:19:23.541 "adrfam": "ipv4", 00:19:23.541 "trsvcid": "4420", 00:19:23.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.541 "hdgst": false, 00:19:23.541 "ddgst": false 00:19:23.541 }, 00:19:23.541 "method": "bdev_nvme_attach_controller" 00:19:23.541 }' 00:19:23.541 18:28:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:23.541 18:28:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:23.541 18:28:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.541 18:28:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.541 18:28:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:23.541 18:28:19 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:23.541 18:28:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:23.541 18:28:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:23.541 18:28:19 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:23.541 18:28:19 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.541 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:23.541 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:23.541 fio-3.35 00:19:23.541 Starting 2 threads 00:19:23.542 [2024-11-17 18:28:20.543781] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:23.542 [2024-11-17 18:28:20.543849] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:33.524 00:19:33.524 filename0: (groupid=0, jobs=1): err= 0: pid=86208: Sun Nov 17 18:28:30 2024 00:19:33.524 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(198MiB/10001msec) 00:19:33.524 slat (nsec): min=6413, max=57895, avg=13301.18, stdev=5070.48 00:19:33.524 clat (usec): min=573, max=1394, avg=753.48, stdev=64.05 00:19:33.524 lat (usec): min=579, max=1406, avg=766.78, stdev=64.82 00:19:33.524 clat percentiles (usec): 00:19:33.524 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 701], 00:19:33.524 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:19:33.524 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 865], 00:19:33.524 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 971], 99.95th=[ 1012], 00:19:33.524 | 99.99th=[ 1369] 00:19:33.524 bw ( KiB/s): min=19584, max=20800, per=49.99%, avg=20277.89, stdev=346.14, samples=19 00:19:33.524 iops : min= 4896, max= 5200, avg=5069.47, stdev=86.54, samples=19 00:19:33.524 lat (usec) : 750=52.82%, 1000=47.13% 00:19:33.524 lat (msec) : 2=0.05% 00:19:33.524 cpu : usr=89.48%, sys=9.15%, ctx=15, majf=0, minf=0 00:19:33.524 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.524 issued rwts: total=50704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.524 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:33.524 filename1: (groupid=0, jobs=1): err= 0: pid=86209: Sun Nov 17 18:28:30 2024 00:19:33.524 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(198MiB/10001msec) 00:19:33.524 slat (nsec): min=6332, max=60536, avg=13820.92, stdev=5299.00 00:19:33.524 clat (usec): min=592, max=1392, avg=750.42, stdev=60.41 00:19:33.524 lat (usec): min=601, max=1407, avg=764.24, stdev=61.38 00:19:33.524 clat percentiles (usec): 00:19:33.524 | 1.00th=[ 652], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 701], 00:19:33.524 | 30.00th=[ 717], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 758], 00:19:33.524 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 857], 00:19:33.524 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 963], 99.95th=[ 996], 00:19:33.524 | 99.99th=[ 1369] 00:19:33.524 bw ( KiB/s): min=19584, max=20800, per=49.99%, avg=20277.89, stdev=341.68, samples=19 00:19:33.524 iops : min= 4896, max= 5200, avg=5069.47, stdev=85.42, samples=19 00:19:33.524 lat (usec) : 750=55.69%, 1000=44.26% 00:19:33.524 lat (msec) : 2=0.05% 00:19:33.524 cpu : usr=89.96%, sys=8.65%, ctx=6, majf=0, minf=0 00:19:33.524 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.524 issued rwts: total=50704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.524 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:33.524 00:19:33.524 Run status group 0 (all jobs): 00:19:33.524 READ: bw=39.6MiB/s (41.5MB/s), 19.8MiB/s-19.8MiB/s (20.8MB/s-20.8MB/s), io=396MiB (415MB), run=10001-10001msec 00:19:33.524 18:28:30 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:33.524 18:28:30 -- target/dif.sh@43 -- # local sub 00:19:33.524 18:28:30 -- target/dif.sh@45 -- # for sub in "$@" 00:19:33.524 18:28:30 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:19:33.524 18:28:30 -- target/dif.sh@36 -- # local sub_id=0 00:19:33.524 18:28:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:33.524 18:28:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.524 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:19:33.524 18:28:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.524 18:28:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:33.524 18:28:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.524 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:19:33.524 18:28:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.524 18:28:30 -- target/dif.sh@45 -- # for sub in "$@" 00:19:33.524 18:28:30 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:33.524 18:28:30 -- target/dif.sh@36 -- # local sub_id=1 00:19:33.524 18:28:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:33.524 18:28:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.524 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:19:33.524 18:28:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.524 18:28:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:33.524 18:28:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.524 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:19:33.524 ************************************ 00:19:33.524 END TEST fio_dif_1_multi_subsystems 00:19:33.524 ************************************ 00:19:33.524 18:28:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.524 00:19:33.524 real 0m10.996s 00:19:33.524 user 0m18.613s 00:19:33.524 sys 0m2.017s 00:19:33.524 18:28:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:33.524 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:19:33.524 18:28:30 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:33.524 18:28:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:33.524 18:28:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:33.524 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:19:33.524 ************************************ 00:19:33.524 START TEST fio_dif_rand_params 00:19:33.524 ************************************ 00:19:33.524 18:28:30 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:19:33.524 18:28:30 -- target/dif.sh@100 -- # local NULL_DIF 00:19:33.524 18:28:30 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:33.524 18:28:30 -- target/dif.sh@103 -- # NULL_DIF=3 00:19:33.524 18:28:30 -- target/dif.sh@103 -- # bs=128k 00:19:33.524 18:28:30 -- target/dif.sh@103 -- # numjobs=3 00:19:33.524 18:28:30 -- target/dif.sh@103 -- # iodepth=3 00:19:33.524 18:28:30 -- target/dif.sh@103 -- # runtime=5 00:19:33.524 18:28:30 -- target/dif.sh@105 -- # create_subsystems 0 00:19:33.524 18:28:30 -- target/dif.sh@28 -- # local sub 00:19:33.524 18:28:30 -- target/dif.sh@30 -- # for sub in "$@" 00:19:33.524 18:28:30 -- target/dif.sh@31 -- # create_subsystem 0 00:19:33.524 18:28:30 -- target/dif.sh@18 -- # local sub_id=0 00:19:33.524 18:28:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:33.524 18:28:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.524 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:19:33.524 bdev_null0 00:19:33.524 18:28:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.524 
18:28:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:33.524 18:28:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.524 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:19:33.524 18:28:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.524 18:28:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:33.524 18:28:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.524 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:19:33.524 18:28:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.524 18:28:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:33.524 18:28:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.524 18:28:30 -- common/autotest_common.sh@10 -- # set +x 00:19:33.524 [2024-11-17 18:28:30.944383] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.524 18:28:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.524 18:28:30 -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:33.524 18:28:30 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:33.524 18:28:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:33.524 18:28:30 -- nvmf/common.sh@520 -- # config=() 00:19:33.524 18:28:30 -- nvmf/common.sh@520 -- # local subsystem config 00:19:33.524 18:28:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:33.524 18:28:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:33.524 18:28:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:33.524 { 00:19:33.524 "params": { 00:19:33.524 "name": "Nvme$subsystem", 00:19:33.524 "trtype": "$TEST_TRANSPORT", 00:19:33.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.525 "adrfam": "ipv4", 00:19:33.525 "trsvcid": "$NVMF_PORT", 00:19:33.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.525 "hdgst": ${hdgst:-false}, 00:19:33.525 "ddgst": ${ddgst:-false} 00:19:33.525 }, 00:19:33.525 "method": "bdev_nvme_attach_controller" 00:19:33.525 } 00:19:33.525 EOF 00:19:33.525 )") 00:19:33.525 18:28:30 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:33.525 18:28:30 -- target/dif.sh@82 -- # gen_fio_conf 00:19:33.525 18:28:30 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:33.525 18:28:30 -- target/dif.sh@54 -- # local file 00:19:33.525 18:28:30 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:33.525 18:28:30 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:33.525 18:28:30 -- target/dif.sh@56 -- # cat 00:19:33.525 18:28:30 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.525 18:28:30 -- common/autotest_common.sh@1330 -- # shift 00:19:33.525 18:28:30 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:33.525 18:28:30 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.525 18:28:30 -- nvmf/common.sh@542 -- # cat 00:19:33.525 18:28:30 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.525 18:28:30 -- common/autotest_common.sh@1334 -- # grep libasan 
00:19:33.525 18:28:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:33.525 18:28:30 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:33.525 18:28:30 -- target/dif.sh@72 -- # (( file <= files )) 00:19:33.525 18:28:30 -- nvmf/common.sh@544 -- # jq . 00:19:33.525 18:28:30 -- nvmf/common.sh@545 -- # IFS=, 00:19:33.525 18:28:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:33.525 "params": { 00:19:33.525 "name": "Nvme0", 00:19:33.525 "trtype": "tcp", 00:19:33.525 "traddr": "10.0.0.2", 00:19:33.525 "adrfam": "ipv4", 00:19:33.525 "trsvcid": "4420", 00:19:33.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:33.525 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:33.525 "hdgst": false, 00:19:33.525 "ddgst": false 00:19:33.525 }, 00:19:33.525 "method": "bdev_nvme_attach_controller" 00:19:33.525 }' 00:19:33.525 18:28:30 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:33.525 18:28:30 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:33.525 18:28:30 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.525 18:28:30 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.525 18:28:30 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:33.525 18:28:30 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:33.525 18:28:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:33.525 18:28:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:33.525 18:28:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:33.525 18:28:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:33.525 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:33.525 ... 00:19:33.525 fio-3.35 00:19:33.525 Starting 3 threads 00:19:33.525 [2024-11-17 18:28:31.474597] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
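For reference, the job shape this rand_params pass runs (three jobs of 128 KiB random reads at queue depth 3 for 5 seconds against the DIF type 3 null bdev) corresponds to a job file roughly like the one below. This is an illustrative reconstruction from the parameters traced above, not the file gen_fio_conf actually wrote; in particular the bdev name behind filename= is an assumption:

  [global]
  thread=1            ; fio reports "Starting 3 threads", i.e. jobs run in thread mode
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1        ; assumed, so the 5-second runtime bounds each job

  [filename0]
  filename=Nvme0n1    ; assumed name of the namespace bdev created by bdev_nvme_attach_controller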
00:19:33.525 [2024-11-17 18:28:31.474686] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:38.797 00:19:38.797 filename0: (groupid=0, jobs=1): err= 0: pid=86366: Sun Nov 17 18:28:36 2024 00:19:38.797 read: IOPS=267, BW=33.4MiB/s (35.1MB/s)(167MiB/5001msec) 00:19:38.797 slat (nsec): min=6766, max=56271, avg=15099.16, stdev=5795.19 00:19:38.797 clat (usec): min=10289, max=14446, avg=11178.23, stdev=524.66 00:19:38.797 lat (usec): min=10302, max=14472, avg=11193.33, stdev=525.39 00:19:38.797 clat percentiles (usec): 00:19:38.797 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:19:38.797 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:19:38.797 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12125], 00:19:38.797 | 99.00th=[12387], 99.50th=[12518], 99.90th=[14484], 99.95th=[14484], 00:19:38.797 | 99.99th=[14484] 00:19:38.797 bw ( KiB/s): min=33724, max=35328, per=33.37%, avg=34296.44, stdev=551.48, samples=9 00:19:38.797 iops : min= 263, max= 276, avg=267.89, stdev= 4.37, samples=9 00:19:38.797 lat (msec) : 20=100.00% 00:19:38.797 cpu : usr=90.76%, sys=8.66%, ctx=11, majf=0, minf=8 00:19:38.797 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:38.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.797 issued rwts: total=1338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.797 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:38.797 filename0: (groupid=0, jobs=1): err= 0: pid=86367: Sun Nov 17 18:28:36 2024 00:19:38.797 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(168MiB/5007msec) 00:19:38.797 slat (nsec): min=7551, max=57679, avg=15915.86, stdev=5327.19 00:19:38.797 clat (usec): min=8627, max=12504, avg=11164.83, stdev=517.04 00:19:38.797 lat (usec): min=8641, max=12517, avg=11180.75, stdev=517.62 00:19:38.797 clat percentiles (usec): 00:19:38.797 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:19:38.797 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:19:38.797 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12125], 00:19:38.797 | 99.00th=[12387], 99.50th=[12387], 99.90th=[12518], 99.95th=[12518], 00:19:38.797 | 99.99th=[12518] 00:19:38.797 bw ( KiB/s): min=33724, max=35328, per=33.32%, avg=34246.00, stdev=543.86, samples=10 00:19:38.797 iops : min= 263, max= 276, avg=267.50, stdev= 4.30, samples=10 00:19:38.797 lat (msec) : 10=0.22%, 20=99.78% 00:19:38.797 cpu : usr=92.19%, sys=7.23%, ctx=8, majf=0, minf=9 00:19:38.797 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:38.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.797 issued rwts: total=1341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.797 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:38.797 filename0: (groupid=0, jobs=1): err= 0: pid=86368: Sun Nov 17 18:28:36 2024 00:19:38.797 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(168MiB/5007msec) 00:19:38.797 slat (nsec): min=6748, max=51860, avg=15980.12, stdev=5448.74 00:19:38.797 clat (usec): min=8613, max=12499, avg=11163.19, stdev=515.27 00:19:38.797 lat (usec): min=8621, max=12520, avg=11179.17, stdev=515.95 00:19:38.797 clat percentiles (usec): 00:19:38.797 | 1.00th=[10421], 5.00th=[10552], 
10.00th=[10552], 20.00th=[10683], 00:19:38.797 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:19:38.797 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12125], 00:19:38.797 | 99.00th=[12387], 99.50th=[12387], 99.90th=[12518], 99.95th=[12518], 00:19:38.797 | 99.99th=[12518] 00:19:38.797 bw ( KiB/s): min=33724, max=35328, per=33.32%, avg=34246.00, stdev=543.86, samples=10 00:19:38.797 iops : min= 263, max= 276, avg=267.50, stdev= 4.30, samples=10 00:19:38.797 lat (msec) : 10=0.22%, 20=99.78% 00:19:38.797 cpu : usr=91.85%, sys=7.57%, ctx=45, majf=0, minf=9 00:19:38.797 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:38.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.798 issued rwts: total=1341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:38.798 00:19:38.798 Run status group 0 (all jobs): 00:19:38.798 READ: bw=100MiB/s (105MB/s), 33.4MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=503MiB (527MB), run=5001-5007msec 00:19:38.798 18:28:36 -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:38.798 18:28:36 -- target/dif.sh@43 -- # local sub 00:19:38.798 18:28:36 -- target/dif.sh@45 -- # for sub in "$@" 00:19:38.798 18:28:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:38.798 18:28:36 -- target/dif.sh@36 -- # local sub_id=0 00:19:38.798 18:28:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@109 -- # NULL_DIF=2 00:19:38.798 18:28:36 -- target/dif.sh@109 -- # bs=4k 00:19:38.798 18:28:36 -- target/dif.sh@109 -- # numjobs=8 00:19:38.798 18:28:36 -- target/dif.sh@109 -- # iodepth=16 00:19:38.798 18:28:36 -- target/dif.sh@109 -- # runtime= 00:19:38.798 18:28:36 -- target/dif.sh@109 -- # files=2 00:19:38.798 18:28:36 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:38.798 18:28:36 -- target/dif.sh@28 -- # local sub 00:19:38.798 18:28:36 -- target/dif.sh@30 -- # for sub in "$@" 00:19:38.798 18:28:36 -- target/dif.sh@31 -- # create_subsystem 0 00:19:38.798 18:28:36 -- target/dif.sh@18 -- # local sub_id=0 00:19:38.798 18:28:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 bdev_null0 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 [2024-11-17 18:28:36.794249] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@30 -- # for sub in "$@" 00:19:38.798 18:28:36 -- target/dif.sh@31 -- # create_subsystem 1 00:19:38.798 18:28:36 -- target/dif.sh@18 -- # local sub_id=1 00:19:38.798 18:28:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 bdev_null1 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@30 -- # for sub in "$@" 00:19:38.798 18:28:36 -- target/dif.sh@31 -- # create_subsystem 2 00:19:38.798 18:28:36 -- target/dif.sh@18 -- # local sub_id=2 00:19:38.798 18:28:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 bdev_null2 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 18:28:36 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:38.798 18:28:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.798 18:28:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.798 18:28:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.798 18:28:36 -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:38.798 18:28:36 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:38.798 18:28:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:38.798 18:28:36 -- nvmf/common.sh@520 -- # config=() 00:19:38.798 18:28:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:38.798 18:28:36 -- nvmf/common.sh@520 -- # local subsystem config 00:19:38.798 18:28:36 -- target/dif.sh@82 -- # gen_fio_conf 00:19:38.798 18:28:36 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:38.798 18:28:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:38.798 18:28:36 -- target/dif.sh@54 -- # local file 00:19:38.798 18:28:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:38.798 { 00:19:38.798 "params": { 00:19:38.798 "name": "Nvme$subsystem", 00:19:38.798 "trtype": "$TEST_TRANSPORT", 00:19:38.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.798 "adrfam": "ipv4", 00:19:38.798 "trsvcid": "$NVMF_PORT", 00:19:38.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.798 "hdgst": ${hdgst:-false}, 00:19:38.798 "ddgst": ${ddgst:-false} 00:19:38.798 }, 00:19:38.798 "method": "bdev_nvme_attach_controller" 00:19:38.798 } 00:19:38.798 EOF 00:19:38.798 )") 00:19:38.798 18:28:36 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:38.798 18:28:36 -- target/dif.sh@56 -- # cat 00:19:38.798 18:28:36 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:38.798 18:28:36 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:38.798 18:28:36 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.798 18:28:36 -- common/autotest_common.sh@1330 -- # shift 00:19:38.798 18:28:36 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:38.798 18:28:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:38.798 18:28:36 -- nvmf/common.sh@542 -- # cat 00:19:38.799 18:28:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:38.799 18:28:36 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.799 18:28:36 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:38.799 18:28:36 -- target/dif.sh@72 -- # (( file <= files )) 00:19:38.799 18:28:36 -- target/dif.sh@73 -- # cat 00:19:38.799 18:28:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:38.799 18:28:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:38.799 18:28:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:38.799 { 00:19:38.799 "params": { 00:19:38.799 "name": "Nvme$subsystem", 00:19:38.799 "trtype": "$TEST_TRANSPORT", 00:19:38.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.799 "adrfam": "ipv4", 00:19:38.799 "trsvcid": "$NVMF_PORT", 00:19:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:19:38.799 "hdgst": ${hdgst:-false}, 00:19:38.799 "ddgst": ${ddgst:-false} 00:19:38.799 }, 00:19:38.799 "method": "bdev_nvme_attach_controller" 00:19:38.799 } 00:19:38.799 EOF 00:19:38.799 )") 00:19:38.799 18:28:36 -- target/dif.sh@72 -- # (( file++ )) 00:19:38.799 18:28:36 -- target/dif.sh@72 -- # (( file <= files )) 00:19:38.799 18:28:36 -- target/dif.sh@73 -- # cat 00:19:38.799 18:28:36 -- nvmf/common.sh@542 -- # cat 00:19:38.799 18:28:36 -- target/dif.sh@72 -- # (( file++ )) 00:19:38.799 18:28:36 -- target/dif.sh@72 -- # (( file <= files )) 00:19:38.799 18:28:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:38.799 18:28:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:38.799 { 00:19:38.799 "params": { 00:19:38.799 "name": "Nvme$subsystem", 00:19:38.799 "trtype": "$TEST_TRANSPORT", 00:19:38.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.799 "adrfam": "ipv4", 00:19:38.799 "trsvcid": "$NVMF_PORT", 00:19:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.799 "hdgst": ${hdgst:-false}, 00:19:38.799 "ddgst": ${ddgst:-false} 00:19:38.799 }, 00:19:38.799 "method": "bdev_nvme_attach_controller" 00:19:38.799 } 00:19:38.799 EOF 00:19:38.799 )") 00:19:38.799 18:28:36 -- nvmf/common.sh@542 -- # cat 00:19:38.799 18:28:36 -- nvmf/common.sh@544 -- # jq . 00:19:38.799 18:28:36 -- nvmf/common.sh@545 -- # IFS=, 00:19:38.799 18:28:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:38.799 "params": { 00:19:38.799 "name": "Nvme0", 00:19:38.799 "trtype": "tcp", 00:19:38.799 "traddr": "10.0.0.2", 00:19:38.799 "adrfam": "ipv4", 00:19:38.799 "trsvcid": "4420", 00:19:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:38.799 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:38.799 "hdgst": false, 00:19:38.799 "ddgst": false 00:19:38.799 }, 00:19:38.799 "method": "bdev_nvme_attach_controller" 00:19:38.799 },{ 00:19:38.799 "params": { 00:19:38.799 "name": "Nvme1", 00:19:38.799 "trtype": "tcp", 00:19:38.799 "traddr": "10.0.0.2", 00:19:38.799 "adrfam": "ipv4", 00:19:38.799 "trsvcid": "4420", 00:19:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.799 "hdgst": false, 00:19:38.799 "ddgst": false 00:19:38.799 }, 00:19:38.799 "method": "bdev_nvme_attach_controller" 00:19:38.799 },{ 00:19:38.799 "params": { 00:19:38.799 "name": "Nvme2", 00:19:38.799 "trtype": "tcp", 00:19:38.799 "traddr": "10.0.0.2", 00:19:38.799 "adrfam": "ipv4", 00:19:38.799 "trsvcid": "4420", 00:19:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:38.799 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:38.799 "hdgst": false, 00:19:38.799 "ddgst": false 00:19:38.799 }, 00:19:38.799 "method": "bdev_nvme_attach_controller" 00:19:38.799 }' 00:19:38.799 18:28:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:38.799 18:28:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:38.799 18:28:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:38.799 18:28:36 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.799 18:28:36 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:38.799 18:28:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:38.799 18:28:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:38.799 18:28:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:38.799 18:28:36 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:38.799 18:28:36 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:39.059 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:39.059 ... 00:19:39.059 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:39.059 ... 00:19:39.059 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:39.059 ... 00:19:39.059 fio-3.35 00:19:39.059 Starting 24 threads 00:19:39.318 [2024-11-17 18:28:37.527532] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:39.318 [2024-11-17 18:28:37.527599] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:51.549 00:19:51.549 filename0: (groupid=0, jobs=1): err= 0: pid=86463: Sun Nov 17 18:28:47 2024 00:19:51.549 read: IOPS=214, BW=857KiB/s (877kB/s)(8604KiB/10041msec) 00:19:51.549 slat (usec): min=4, max=9039, avg=28.35, stdev=312.21 00:19:51.549 clat (usec): min=1340, max=135133, avg=74519.48, stdev=22700.80 00:19:51.549 lat (usec): min=1350, max=135148, avg=74547.83, stdev=22702.74 00:19:51.549 clat percentiles (msec): 00:19:51.549 | 1.00th=[ 6], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 58], 00:19:51.549 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:19:51.549 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 109], 00:19:51.549 | 99.00th=[ 114], 99.50th=[ 116], 99.90th=[ 129], 99.95th=[ 132], 00:19:51.549 | 99.99th=[ 136] 00:19:51.549 bw ( KiB/s): min= 664, max= 1312, per=4.31%, avg=854.00, stdev=151.83, samples=20 00:19:51.549 iops : min= 166, max= 328, avg=213.50, stdev=37.96, samples=20 00:19:51.549 lat (msec) : 2=0.09%, 4=0.84%, 10=2.14%, 50=10.51%, 100=71.36% 00:19:51.549 lat (msec) : 250=15.06% 00:19:51.549 cpu : usr=43.25%, sys=2.29%, ctx=1272, majf=0, minf=0 00:19:51.549 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:51.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.549 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.549 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.549 filename0: (groupid=0, jobs=1): err= 0: pid=86464: Sun Nov 17 18:28:47 2024 00:19:51.549 read: IOPS=208, BW=835KiB/s (855kB/s)(8380KiB/10033msec) 00:19:51.549 slat (usec): min=4, max=4033, avg=16.20, stdev=88.05 00:19:51.549 clat (msec): min=33, max=137, avg=76.53, stdev=19.48 00:19:51.549 lat (msec): min=33, max=137, avg=76.55, stdev=19.48 00:19:51.549 clat percentiles (msec): 00:19:51.549 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:19:51.549 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 80], 00:19:51.549 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:19:51.549 | 99.00th=[ 116], 99.50th=[ 116], 99.90th=[ 130], 99.95th=[ 132], 00:19:51.549 | 99.99th=[ 138] 00:19:51.549 bw ( KiB/s): min= 688, max= 1024, per=4.20%, avg=831.60, stdev=99.60, samples=20 00:19:51.549 iops : min= 172, max= 256, avg=207.90, stdev=24.90, samples=20 00:19:51.549 lat (msec) : 50=11.69%, 100=74.46%, 250=13.84% 00:19:51.549 cpu : usr=40.90%, sys=2.04%, ctx=1202, majf=0, minf=9 00:19:51.549 IO depths : 1=0.1%, 2=0.2%, 
4=0.7%, 8=82.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:51.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.549 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.549 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.549 filename0: (groupid=0, jobs=1): err= 0: pid=86465: Sun Nov 17 18:28:47 2024 00:19:51.549 read: IOPS=190, BW=760KiB/s (778kB/s)(7620KiB/10026msec) 00:19:51.549 slat (usec): min=4, max=8033, avg=24.82, stdev=275.35 00:19:51.549 clat (msec): min=35, max=153, avg=83.97, stdev=24.93 00:19:51.549 lat (msec): min=35, max=153, avg=83.99, stdev=24.93 00:19:51.549 clat percentiles (msec): 00:19:51.549 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 62], 00:19:51.549 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 92], 00:19:51.549 | 70.00th=[ 97], 80.00th=[ 107], 90.00th=[ 120], 95.00th=[ 132], 00:19:51.549 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:19:51.549 | 99.99th=[ 155] 00:19:51.549 bw ( KiB/s): min= 512, max= 1024, per=3.83%, avg=758.00, stdev=171.62, samples=20 00:19:51.549 iops : min= 128, max= 256, avg=189.50, stdev=42.90, samples=20 00:19:51.549 lat (msec) : 50=11.34%, 100=62.78%, 250=25.88% 00:19:51.549 cpu : usr=33.10%, sys=1.87%, ctx=915, majf=0, minf=9 00:19:51.549 IO depths : 1=0.1%, 2=2.0%, 4=8.4%, 8=74.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:51.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.549 complete : 0=0.0%, 4=90.0%, 8=8.2%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.549 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.549 filename0: (groupid=0, jobs=1): err= 0: pid=86466: Sun Nov 17 18:28:47 2024 00:19:51.549 read: IOPS=225, BW=902KiB/s (923kB/s)(9016KiB/10001msec) 00:19:51.549 slat (usec): min=4, max=11027, avg=23.58, stdev=268.55 00:19:51.549 clat (usec): min=932, max=183913, avg=70886.47, stdev=24748.08 00:19:51.549 lat (usec): min=940, max=183927, avg=70910.05, stdev=24746.23 00:19:51.549 clat percentiles (msec): 00:19:51.549 | 1.00th=[ 3], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 50], 00:19:51.549 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 74], 00:19:51.549 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 108], 00:19:51.549 | 99.00th=[ 116], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 184], 00:19:51.549 | 99.99th=[ 184] 00:19:51.549 bw ( KiB/s): min= 552, max= 1072, per=4.35%, avg=860.89, stdev=141.24, samples=19 00:19:51.549 iops : min= 138, max= 268, avg=215.21, stdev=35.32, samples=19 00:19:51.549 lat (usec) : 1000=0.18% 00:19:51.549 lat (msec) : 2=0.40%, 4=0.89%, 10=1.55%, 20=0.27%, 50=18.90% 00:19:51.549 lat (msec) : 100=63.89%, 250=13.93% 00:19:51.549 cpu : usr=42.94%, sys=2.24%, ctx=1397, majf=0, minf=9 00:19:51.549 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:51.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.549 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.549 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.549 filename0: (groupid=0, jobs=1): err= 0: pid=86467: Sun Nov 17 18:28:47 2024 00:19:51.549 read: IOPS=211, BW=847KiB/s (868kB/s)(8480KiB/10009msec) 00:19:51.549 slat (usec): min=4, max=8029, 
avg=25.97, stdev=301.29 00:19:51.549 clat (msec): min=12, max=174, avg=75.42, stdev=23.11 00:19:51.549 lat (msec): min=12, max=174, avg=75.44, stdev=23.12 00:19:51.549 clat percentiles (msec): 00:19:51.549 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:19:51.549 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:19:51.549 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 108], 00:19:51.549 | 99.00th=[ 132], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 176], 00:19:51.549 | 99.99th=[ 176] 00:19:51.549 bw ( KiB/s): min= 507, max= 1080, per=4.21%, avg=832.63, stdev=173.48, samples=19 00:19:51.549 iops : min= 126, max= 270, avg=208.11, stdev=43.43, samples=19 00:19:51.549 lat (msec) : 20=0.28%, 50=20.66%, 100=63.54%, 250=15.52% 00:19:51.549 cpu : usr=31.27%, sys=1.81%, ctx=865, majf=0, minf=9 00:19:51.549 IO depths : 1=0.1%, 2=1.0%, 4=3.7%, 8=80.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:51.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.549 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.549 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.549 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.549 filename0: (groupid=0, jobs=1): err= 0: pid=86468: Sun Nov 17 18:28:47 2024 00:19:51.549 read: IOPS=207, BW=831KiB/s (851kB/s)(8320KiB/10015msec) 00:19:51.549 slat (usec): min=4, max=4028, avg=25.92, stdev=203.48 00:19:51.549 clat (msec): min=15, max=180, avg=76.88, stdev=23.16 00:19:51.549 lat (msec): min=15, max=180, avg=76.91, stdev=23.15 00:19:51.549 clat percentiles (msec): 00:19:51.549 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 55], 00:19:51.549 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 82], 00:19:51.549 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 112], 00:19:51.549 | 99.00th=[ 134], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 182], 00:19:51.549 | 99.99th=[ 182] 00:19:51.549 bw ( KiB/s): min= 513, max= 1065, per=4.18%, avg=827.70, stdev=177.89, samples=20 00:19:51.549 iops : min= 128, max= 266, avg=206.90, stdev=44.48, samples=20 00:19:51.550 lat (msec) : 20=0.14%, 50=16.73%, 100=65.62%, 250=17.50% 00:19:51.550 cpu : usr=43.20%, sys=2.53%, ctx=1339, majf=0, minf=9 00:19:51.550 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=77.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:51.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.550 filename0: (groupid=0, jobs=1): err= 0: pid=86469: Sun Nov 17 18:28:47 2024 00:19:51.550 read: IOPS=202, BW=812KiB/s (831kB/s)(8144KiB/10030msec) 00:19:51.550 slat (nsec): min=3683, max=37504, avg=14263.95, stdev=4931.92 00:19:51.550 clat (msec): min=33, max=132, avg=78.67, stdev=20.28 00:19:51.550 lat (msec): min=33, max=132, avg=78.69, stdev=20.28 00:19:51.550 clat percentiles (msec): 00:19:51.550 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:51.550 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 85], 00:19:51.550 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:19:51.550 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:19:51.550 | 99.99th=[ 132] 00:19:51.550 bw ( KiB/s): min= 640, max= 1048, per=4.10%, avg=810.40, stdev=121.55, samples=20 00:19:51.550 iops : min= 160, max= 262, 
avg=202.60, stdev=30.39, samples=20 00:19:51.550 lat (msec) : 50=13.31%, 100=73.08%, 250=13.61% 00:19:51.550 cpu : usr=31.39%, sys=1.75%, ctx=899, majf=0, minf=9 00:19:51.550 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.2%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:51.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.550 filename0: (groupid=0, jobs=1): err= 0: pid=86470: Sun Nov 17 18:28:47 2024 00:19:51.550 read: IOPS=200, BW=802KiB/s (821kB/s)(8028KiB/10014msec) 00:19:51.550 slat (usec): min=3, max=4025, avg=18.65, stdev=126.68 00:19:51.550 clat (msec): min=33, max=182, avg=79.71, stdev=24.07 00:19:51.550 lat (msec): min=33, max=182, avg=79.73, stdev=24.06 00:19:51.550 clat percentiles (msec): 00:19:51.550 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:19:51.550 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 87], 00:19:51.550 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 108], 95.00th=[ 117], 00:19:51.550 | 99.00th=[ 142], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 182], 00:19:51.550 | 99.99th=[ 182] 00:19:51.550 bw ( KiB/s): min= 512, max= 1040, per=4.04%, avg=798.45, stdev=177.84, samples=20 00:19:51.550 iops : min= 128, max= 260, avg=199.60, stdev=44.45, samples=20 00:19:51.550 lat (msec) : 50=15.25%, 100=64.47%, 250=20.28% 00:19:51.550 cpu : usr=36.33%, sys=1.92%, ctx=1361, majf=0, minf=9 00:19:51.550 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:51.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 complete : 0=0.0%, 4=88.7%, 8=10.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.550 filename1: (groupid=0, jobs=1): err= 0: pid=86471: Sun Nov 17 18:28:47 2024 00:19:51.550 read: IOPS=207, BW=830KiB/s (850kB/s)(8332KiB/10036msec) 00:19:51.550 slat (usec): min=3, max=6026, avg=20.91, stdev=181.00 00:19:51.550 clat (msec): min=9, max=144, avg=76.93, stdev=21.72 00:19:51.550 lat (msec): min=9, max=144, avg=76.95, stdev=21.71 00:19:51.550 clat percentiles (msec): 00:19:51.550 | 1.00th=[ 11], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 61], 00:19:51.550 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:19:51.550 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 110], 00:19:51.550 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 130], 99.95th=[ 144], 00:19:51.550 | 99.99th=[ 144] 00:19:51.550 bw ( KiB/s): min= 616, max= 1024, per=4.18%, avg=826.80, stdev=134.22, samples=20 00:19:51.550 iops : min= 154, max= 256, avg=206.70, stdev=33.56, samples=20 00:19:51.550 lat (msec) : 10=0.77%, 20=0.77%, 50=11.47%, 100=71.39%, 250=15.60% 00:19:51.550 cpu : usr=41.23%, sys=2.45%, ctx=1388, majf=0, minf=9 00:19:51.550 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:51.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 issued rwts: total=2083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.550 filename1: (groupid=0, jobs=1): err= 0: pid=86472: Sun Nov 17 18:28:47 2024 00:19:51.550 read: 
IOPS=205, BW=822KiB/s (842kB/s)(8228KiB/10007msec) 00:19:51.550 slat (usec): min=3, max=4471, avg=18.76, stdev=132.30 00:19:51.550 clat (msec): min=11, max=181, avg=77.74, stdev=24.44 00:19:51.550 lat (msec): min=11, max=182, avg=77.76, stdev=24.43 00:19:51.550 clat percentiles (msec): 00:19:51.550 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 54], 00:19:51.550 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 83], 00:19:51.550 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 120], 00:19:51.550 | 99.00th=[ 132], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 182], 00:19:51.550 | 99.99th=[ 182] 00:19:51.550 bw ( KiB/s): min= 509, max= 1032, per=4.07%, avg=805.00, stdev=174.35, samples=19 00:19:51.550 iops : min= 127, max= 258, avg=201.21, stdev=43.58, samples=19 00:19:51.550 lat (msec) : 20=0.29%, 50=16.82%, 100=64.17%, 250=18.72% 00:19:51.550 cpu : usr=36.00%, sys=2.06%, ctx=1335, majf=0, minf=9 00:19:51.550 IO depths : 1=0.1%, 2=1.4%, 4=5.3%, 8=78.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:51.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 issued rwts: total=2057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.550 filename1: (groupid=0, jobs=1): err= 0: pid=86473: Sun Nov 17 18:28:47 2024 00:19:51.550 read: IOPS=209, BW=836KiB/s (856kB/s)(8396KiB/10039msec) 00:19:51.550 slat (usec): min=4, max=8037, avg=21.41, stdev=212.89 00:19:51.550 clat (msec): min=7, max=150, avg=76.32, stdev=21.35 00:19:51.550 lat (msec): min=7, max=150, avg=76.35, stdev=21.35 00:19:51.550 clat percentiles (msec): 00:19:51.550 | 1.00th=[ 14], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 61], 00:19:51.550 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 81], 00:19:51.550 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 105], 95.00th=[ 109], 00:19:51.550 | 99.00th=[ 118], 99.50th=[ 118], 99.90th=[ 130], 99.95th=[ 133], 00:19:51.550 | 99.99th=[ 150] 00:19:51.550 bw ( KiB/s): min= 640, max= 1048, per=4.23%, avg=836.00, stdev=116.38, samples=20 00:19:51.550 iops : min= 160, max= 262, avg=209.00, stdev=29.10, samples=20 00:19:51.550 lat (msec) : 10=0.76%, 20=0.76%, 50=11.24%, 100=72.51%, 250=14.72% 00:19:51.550 cpu : usr=41.47%, sys=2.21%, ctx=1303, majf=0, minf=9 00:19:51.550 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:51.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 issued rwts: total=2099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.550 filename1: (groupid=0, jobs=1): err= 0: pid=86474: Sun Nov 17 18:28:47 2024 00:19:51.550 read: IOPS=198, BW=796KiB/s (815kB/s)(7988KiB/10039msec) 00:19:51.550 slat (usec): min=3, max=4025, avg=18.45, stdev=126.96 00:19:51.550 clat (msec): min=8, max=148, avg=80.22, stdev=25.20 00:19:51.550 lat (msec): min=8, max=148, avg=80.24, stdev=25.20 00:19:51.550 clat percentiles (msec): 00:19:51.550 | 1.00th=[ 12], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:19:51.550 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 87], 00:19:51.550 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 126], 00:19:51.550 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:19:51.550 | 99.99th=[ 148] 00:19:51.550 bw ( KiB/s): min= 512, max= 
1080, per=4.02%, avg=795.25, stdev=182.17, samples=20 00:19:51.550 iops : min= 128, max= 270, avg=198.80, stdev=45.53, samples=20 00:19:51.550 lat (msec) : 10=0.80%, 20=0.80%, 50=13.42%, 100=61.94%, 250=23.03% 00:19:51.550 cpu : usr=45.36%, sys=2.54%, ctx=1350, majf=0, minf=9 00:19:51.550 IO depths : 1=0.1%, 2=2.4%, 4=9.4%, 8=73.3%, 16=14.9%, 32=0.0%, >=64=0.0% 00:19:51.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 complete : 0=0.0%, 4=89.8%, 8=8.1%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 issued rwts: total=1997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.550 filename1: (groupid=0, jobs=1): err= 0: pid=86475: Sun Nov 17 18:28:47 2024 00:19:51.550 read: IOPS=203, BW=814KiB/s (833kB/s)(8168KiB/10040msec) 00:19:51.550 slat (usec): min=3, max=8024, avg=24.52, stdev=306.91 00:19:51.550 clat (msec): min=12, max=143, avg=78.48, stdev=20.78 00:19:51.550 lat (msec): min=12, max=143, avg=78.51, stdev=20.78 00:19:51.550 clat percentiles (msec): 00:19:51.550 | 1.00th=[ 15], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:19:51.550 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 85], 00:19:51.550 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 108], 00:19:51.550 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 144], 00:19:51.550 | 99.99th=[ 144] 00:19:51.550 bw ( KiB/s): min= 616, max= 992, per=4.10%, avg=810.45, stdev=113.84, samples=20 00:19:51.550 iops : min= 154, max= 248, avg=202.60, stdev=28.45, samples=20 00:19:51.550 lat (msec) : 20=1.57%, 50=8.52%, 100=74.73%, 250=15.18% 00:19:51.550 cpu : usr=31.38%, sys=1.82%, ctx=894, majf=0, minf=9 00:19:51.550 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=81.5%, 16=16.9%, 32=0.0%, >=64=0.0% 00:19:51.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 complete : 0=0.0%, 4=88.2%, 8=11.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.550 issued rwts: total=2042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.550 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.550 filename1: (groupid=0, jobs=1): err= 0: pid=86476: Sun Nov 17 18:28:47 2024 00:19:51.550 read: IOPS=203, BW=814KiB/s (833kB/s)(8164KiB/10033msec) 00:19:51.550 slat (usec): min=3, max=8025, avg=18.38, stdev=177.37 00:19:51.550 clat (msec): min=32, max=136, avg=78.52, stdev=20.85 00:19:51.550 lat (msec): min=32, max=136, avg=78.54, stdev=20.85 00:19:51.551 clat percentiles (msec): 00:19:51.551 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:51.551 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 85], 00:19:51.551 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 110], 00:19:51.551 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 138], 00:19:51.551 | 99.99th=[ 138] 00:19:51.551 bw ( KiB/s): min= 640, max= 976, per=4.10%, avg=810.00, stdev=130.78, samples=20 00:19:51.551 iops : min= 160, max= 244, avg=202.50, stdev=32.70, samples=20 00:19:51.551 lat (msec) : 50=13.52%, 100=72.66%, 250=13.82% 00:19:51.551 cpu : usr=32.54%, sys=1.78%, ctx=1028, majf=0, minf=9 00:19:51.551 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:51.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 complete : 0=0.0%, 4=88.7%, 8=10.3%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 issued rwts: total=2041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.551 latency : target=0, window=0, percentile=100.00%, depth=16 
00:19:51.551 filename1: (groupid=0, jobs=1): err= 0: pid=86477: Sun Nov 17 18:28:47 2024 00:19:51.551 read: IOPS=207, BW=830KiB/s (850kB/s)(8320KiB/10021msec) 00:19:51.551 slat (usec): min=3, max=8025, avg=30.04, stdev=350.94 00:19:51.551 clat (msec): min=32, max=163, avg=76.93, stdev=21.30 00:19:51.551 lat (msec): min=32, max=163, avg=76.96, stdev=21.30 00:19:51.551 clat percentiles (msec): 00:19:51.551 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 60], 00:19:51.551 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 83], 00:19:51.551 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 108], 00:19:51.551 | 99.00th=[ 122], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 163], 00:19:51.551 | 99.99th=[ 163] 00:19:51.551 bw ( KiB/s): min= 621, max= 1024, per=4.17%, avg=825.45, stdev=140.07, samples=20 00:19:51.551 iops : min= 155, max= 256, avg=206.35, stdev=35.04, samples=20 00:19:51.551 lat (msec) : 50=17.31%, 100=67.93%, 250=14.76% 00:19:51.551 cpu : usr=33.07%, sys=1.85%, ctx=941, majf=0, minf=9 00:19:51.551 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:51.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.551 filename1: (groupid=0, jobs=1): err= 0: pid=86478: Sun Nov 17 18:28:47 2024 00:19:51.551 read: IOPS=207, BW=831KiB/s (850kB/s)(8316KiB/10013msec) 00:19:51.551 slat (usec): min=4, max=8023, avg=21.45, stdev=215.20 00:19:51.551 clat (msec): min=15, max=183, avg=76.92, stdev=22.52 00:19:51.551 lat (msec): min=15, max=183, avg=76.94, stdev=22.53 00:19:51.551 clat percentiles (msec): 00:19:51.551 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:19:51.551 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:19:51.551 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 111], 00:19:51.551 | 99.00th=[ 132], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 184], 00:19:51.551 | 99.99th=[ 184] 00:19:51.551 bw ( KiB/s): min= 512, max= 1048, per=4.18%, avg=827.60, stdev=157.74, samples=20 00:19:51.551 iops : min= 128, max= 262, avg=206.90, stdev=39.44, samples=20 00:19:51.551 lat (msec) : 20=0.14%, 50=16.26%, 100=67.15%, 250=16.45% 00:19:51.551 cpu : usr=35.83%, sys=2.04%, ctx=1012, majf=0, minf=9 00:19:51.551 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=80.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:51.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.551 filename2: (groupid=0, jobs=1): err= 0: pid=86479: Sun Nov 17 18:28:47 2024 00:19:51.551 read: IOPS=199, BW=798KiB/s (817kB/s)(8004KiB/10032msec) 00:19:51.551 slat (usec): min=4, max=8034, avg=22.09, stdev=253.40 00:19:51.551 clat (msec): min=14, max=143, avg=80.02, stdev=21.81 00:19:51.551 lat (msec): min=14, max=143, avg=80.04, stdev=21.81 00:19:51.551 clat percentiles (msec): 00:19:51.551 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 62], 00:19:51.551 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:19:51.551 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:19:51.551 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 136], 
99.95th=[ 144], 00:19:51.551 | 99.99th=[ 144] 00:19:51.551 bw ( KiB/s): min= 604, max= 968, per=4.03%, avg=796.50, stdev=131.91, samples=20 00:19:51.551 iops : min= 151, max= 242, avg=199.10, stdev=32.96, samples=20 00:19:51.551 lat (msec) : 20=0.80%, 50=10.34%, 100=72.46%, 250=16.39% 00:19:51.551 cpu : usr=31.43%, sys=1.72%, ctx=867, majf=0, minf=9 00:19:51.551 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=76.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:51.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 issued rwts: total=2001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.551 filename2: (groupid=0, jobs=1): err= 0: pid=86480: Sun Nov 17 18:28:47 2024 00:19:51.551 read: IOPS=199, BW=796KiB/s (815kB/s)(7984KiB/10027msec) 00:19:51.551 slat (usec): min=4, max=4025, avg=17.78, stdev=127.03 00:19:51.551 clat (msec): min=33, max=171, avg=80.24, stdev=22.55 00:19:51.551 lat (msec): min=33, max=171, avg=80.25, stdev=22.56 00:19:51.551 clat percentiles (msec): 00:19:51.551 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:51.551 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 87], 00:19:51.551 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 113], 00:19:51.551 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 171], 99.95th=[ 171], 00:19:51.551 | 99.99th=[ 171] 00:19:51.551 bw ( KiB/s): min= 528, max= 1096, per=4.01%, avg=792.00, stdev=170.79, samples=20 00:19:51.551 iops : min= 132, max= 274, avg=198.00, stdev=42.70, samples=20 00:19:51.551 lat (msec) : 50=14.53%, 100=65.58%, 250=19.89% 00:19:51.551 cpu : usr=36.44%, sys=1.97%, ctx=1028, majf=0, minf=9 00:19:51.551 IO depths : 1=0.1%, 2=2.1%, 4=8.2%, 8=74.5%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:51.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 complete : 0=0.0%, 4=89.6%, 8=8.7%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.551 filename2: (groupid=0, jobs=1): err= 0: pid=86481: Sun Nov 17 18:28:47 2024 00:19:51.551 read: IOPS=205, BW=823KiB/s (843kB/s)(8244KiB/10013msec) 00:19:51.551 slat (usec): min=3, max=8024, avg=19.73, stdev=197.31 00:19:51.551 clat (msec): min=15, max=154, avg=77.62, stdev=21.09 00:19:51.551 lat (msec): min=15, max=154, avg=77.64, stdev=21.09 00:19:51.551 clat percentiles (msec): 00:19:51.551 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:19:51.551 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:19:51.551 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 108], 00:19:51.551 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 155], 00:19:51.551 | 99.99th=[ 155] 00:19:51.551 bw ( KiB/s): min= 624, max= 1072, per=4.15%, avg=820.80, stdev=149.61, samples=20 00:19:51.551 iops : min= 156, max= 268, avg=205.20, stdev=37.40, samples=20 00:19:51.551 lat (msec) : 20=0.29%, 50=15.09%, 100=70.01%, 250=14.60% 00:19:51.551 cpu : usr=31.41%, sys=1.79%, ctx=909, majf=0, minf=9 00:19:51.551 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=78.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:51.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 complete : 0=0.0%, 4=88.6%, 8=10.3%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 issued rwts: total=2061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:19:51.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.551 filename2: (groupid=0, jobs=1): err= 0: pid=86482: Sun Nov 17 18:28:47 2024 00:19:51.551 read: IOPS=211, BW=845KiB/s (865kB/s)(8452KiB/10003msec) 00:19:51.551 slat (usec): min=4, max=8036, avg=25.16, stdev=302.03 00:19:51.551 clat (msec): min=4, max=219, avg=75.59, stdev=25.38 00:19:51.551 lat (msec): min=4, max=219, avg=75.62, stdev=25.38 00:19:51.551 clat percentiles (msec): 00:19:51.551 | 1.00th=[ 6], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 57], 00:19:51.551 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:19:51.551 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:19:51.551 | 99.00th=[ 132], 99.50th=[ 184], 99.90th=[ 184], 99.95th=[ 220], 00:19:51.551 | 99.99th=[ 220] 00:19:51.551 bw ( KiB/s): min= 512, max= 1024, per=4.13%, avg=817.05, stdev=170.59, samples=19 00:19:51.551 iops : min= 128, max= 256, avg=204.26, stdev=42.65, samples=19 00:19:51.551 lat (msec) : 10=1.66%, 20=0.33%, 50=17.42%, 100=66.78%, 250=13.82% 00:19:51.551 cpu : usr=31.38%, sys=1.77%, ctx=857, majf=0, minf=9 00:19:51.551 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=78.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:51.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.551 issued rwts: total=2113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.551 filename2: (groupid=0, jobs=1): err= 0: pid=86483: Sun Nov 17 18:28:47 2024 00:19:51.551 read: IOPS=196, BW=785KiB/s (803kB/s)(7868KiB/10029msec) 00:19:51.551 slat (usec): min=8, max=8037, avg=27.56, stdev=279.24 00:19:51.551 clat (msec): min=34, max=155, avg=81.37, stdev=24.68 00:19:51.551 lat (msec): min=34, max=155, avg=81.40, stdev=24.70 00:19:51.551 clat percentiles (msec): 00:19:51.551 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:19:51.551 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 88], 00:19:51.551 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 129], 00:19:51.551 | 99.00th=[ 148], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155], 00:19:51.551 | 99.99th=[ 155] 00:19:51.551 bw ( KiB/s): min= 512, max= 1048, per=3.96%, avg=782.80, stdev=177.17, samples=20 00:19:51.551 iops : min= 128, max= 262, avg=195.70, stdev=44.29, samples=20 00:19:51.551 lat (msec) : 50=13.27%, 100=64.01%, 250=22.72% 00:19:51.552 cpu : usr=42.42%, sys=2.15%, ctx=1225, majf=0, minf=9 00:19:51.552 IO depths : 1=0.1%, 2=2.3%, 4=9.5%, 8=73.2%, 16=15.0%, 32=0.0%, >=64=0.0% 00:19:51.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.552 complete : 0=0.0%, 4=89.9%, 8=8.0%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.552 issued rwts: total=1967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.552 filename2: (groupid=0, jobs=1): err= 0: pid=86484: Sun Nov 17 18:28:47 2024 00:19:51.552 read: IOPS=214, BW=859KiB/s (879kB/s)(8596KiB/10012msec) 00:19:51.552 slat (usec): min=4, max=6344, avg=30.07, stdev=264.65 00:19:51.552 clat (msec): min=16, max=177, avg=74.40, stdev=22.49 00:19:51.552 lat (msec): min=16, max=177, avg=74.43, stdev=22.49 00:19:51.552 clat percentiles (msec): 00:19:51.552 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 53], 00:19:51.552 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 77], 00:19:51.552 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 106], 
95.00th=[ 109], 00:19:51.552 | 99.00th=[ 129], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 178], 00:19:51.552 | 99.99th=[ 178] 00:19:51.552 bw ( KiB/s): min= 513, max= 1072, per=4.31%, avg=853.25, stdev=158.33, samples=20 00:19:51.552 iops : min= 128, max= 268, avg=213.30, stdev=39.61, samples=20 00:19:51.552 lat (msec) : 20=0.33%, 50=17.03%, 100=67.61%, 250=15.03% 00:19:51.552 cpu : usr=43.08%, sys=2.42%, ctx=1381, majf=0, minf=9 00:19:51.552 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:51.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.552 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.552 issued rwts: total=2149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.552 filename2: (groupid=0, jobs=1): err= 0: pid=86485: Sun Nov 17 18:28:47 2024 00:19:51.552 read: IOPS=210, BW=841KiB/s (861kB/s)(8424KiB/10013msec) 00:19:51.552 slat (usec): min=4, max=5032, avg=30.30, stdev=254.13 00:19:51.552 clat (msec): min=15, max=182, avg=75.93, stdev=22.73 00:19:51.552 lat (msec): min=15, max=182, avg=75.96, stdev=22.72 00:19:51.552 clat percentiles (msec): 00:19:51.552 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 54], 00:19:51.552 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 80], 00:19:51.552 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 111], 00:19:51.552 | 99.00th=[ 130], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 182], 00:19:51.552 | 99.99th=[ 182] 00:19:51.552 bw ( KiB/s): min= 512, max= 1080, per=4.23%, avg=836.00, stdev=161.10, samples=20 00:19:51.552 iops : min= 128, max= 270, avg=209.00, stdev=40.28, samples=20 00:19:51.552 lat (msec) : 20=0.33%, 50=15.53%, 100=66.95%, 250=17.19% 00:19:51.552 cpu : usr=44.02%, sys=2.11%, ctx=1498, majf=0, minf=9 00:19:51.552 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:51.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.552 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.552 issued rwts: total=2106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.552 filename2: (groupid=0, jobs=1): err= 0: pid=86486: Sun Nov 17 18:28:47 2024 00:19:51.552 read: IOPS=210, BW=843KiB/s (863kB/s)(8428KiB/10002msec) 00:19:51.552 slat (usec): min=3, max=4025, avg=17.51, stdev=123.63 00:19:51.552 clat (msec): min=2, max=155, avg=75.87, stdev=25.78 00:19:51.552 lat (msec): min=2, max=155, avg=75.88, stdev=25.78 00:19:51.552 clat percentiles (msec): 00:19:51.552 | 1.00th=[ 5], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 49], 00:19:51.552 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 00:19:51.552 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 115], 00:19:51.552 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 157], 00:19:51.552 | 99.99th=[ 157] 00:19:51.552 bw ( KiB/s): min= 512, max= 1024, per=4.09%, avg=808.53, stdev=179.17, samples=19 00:19:51.552 iops : min= 128, max= 256, avg=202.11, stdev=44.82, samples=19 00:19:51.552 lat (msec) : 4=0.81%, 10=1.95%, 50=18.75%, 100=61.08%, 250=17.42% 00:19:51.552 cpu : usr=36.06%, sys=1.83%, ctx=983, majf=0, minf=9 00:19:51.552 IO depths : 1=0.1%, 2=1.9%, 4=7.2%, 8=76.0%, 16=14.9%, 32=0.0%, >=64=0.0% 00:19:51.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.552 complete : 0=0.0%, 4=88.9%, 8=9.5%, 16=1.6%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.552 issued rwts: total=2107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.552 00:19:51.552 Run status group 0 (all jobs): 00:19:51.552 READ: bw=19.3MiB/s (20.2MB/s), 760KiB/s-902KiB/s (778kB/s-923kB/s), io=194MiB (203MB), run=10001-10041msec 00:19:51.552 18:28:47 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:51.552 18:28:47 -- target/dif.sh@43 -- # local sub 00:19:51.552 18:28:47 -- target/dif.sh@45 -- # for sub in "$@" 00:19:51.552 18:28:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:51.552 18:28:47 -- target/dif.sh@36 -- # local sub_id=0 00:19:51.552 18:28:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@45 -- # for sub in "$@" 00:19:51.552 18:28:47 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:51.552 18:28:47 -- target/dif.sh@36 -- # local sub_id=1 00:19:51.552 18:28:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@45 -- # for sub in "$@" 00:19:51.552 18:28:47 -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:51.552 18:28:47 -- target/dif.sh@36 -- # local sub_id=2 00:19:51.552 18:28:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@115 -- # NULL_DIF=1 00:19:51.552 18:28:47 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:51.552 18:28:47 -- target/dif.sh@115 -- # numjobs=2 00:19:51.552 18:28:47 -- target/dif.sh@115 -- # iodepth=8 00:19:51.552 18:28:47 -- target/dif.sh@115 -- # runtime=5 00:19:51.552 18:28:47 -- target/dif.sh@115 -- # files=1 00:19:51.552 18:28:47 -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:51.552 18:28:47 -- target/dif.sh@28 -- # local sub 00:19:51.552 18:28:47 -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.552 18:28:47 -- target/dif.sh@31 -- # create_subsystem 0 00:19:51.552 18:28:47 -- target/dif.sh@18 -- 
# local sub_id=0 00:19:51.552 18:28:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 bdev_null0 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 [2024-11-17 18:28:47.963349] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.552 18:28:47 -- target/dif.sh@31 -- # create_subsystem 1 00:19:51.552 18:28:47 -- target/dif.sh@18 -- # local sub_id=1 00:19:51.552 18:28:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 bdev_null1 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:51.552 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.552 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.552 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.552 18:28:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.553 18:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.553 18:28:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.553 18:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.553 18:28:47 -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:51.553 18:28:47 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:51.553 18:28:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:51.553 18:28:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.553 18:28:48 -- nvmf/common.sh@520 -- # config=() 00:19:51.553 18:28:48 -- 
common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.553 18:28:48 -- nvmf/common.sh@520 -- # local subsystem config 00:19:51.553 18:28:48 -- target/dif.sh@82 -- # gen_fio_conf 00:19:51.553 18:28:48 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:51.553 18:28:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:51.553 18:28:48 -- target/dif.sh@54 -- # local file 00:19:51.553 18:28:48 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:51.553 18:28:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:51.553 { 00:19:51.553 "params": { 00:19:51.553 "name": "Nvme$subsystem", 00:19:51.553 "trtype": "$TEST_TRANSPORT", 00:19:51.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.553 "adrfam": "ipv4", 00:19:51.553 "trsvcid": "$NVMF_PORT", 00:19:51.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.553 "hdgst": ${hdgst:-false}, 00:19:51.553 "ddgst": ${ddgst:-false} 00:19:51.553 }, 00:19:51.553 "method": "bdev_nvme_attach_controller" 00:19:51.553 } 00:19:51.553 EOF 00:19:51.553 )") 00:19:51.553 18:28:48 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:51.553 18:28:48 -- target/dif.sh@56 -- # cat 00:19:51.553 18:28:48 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.553 18:28:48 -- common/autotest_common.sh@1330 -- # shift 00:19:51.553 18:28:48 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:51.553 18:28:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.553 18:28:48 -- nvmf/common.sh@542 -- # cat 00:19:51.553 18:28:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.553 18:28:48 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:51.553 18:28:48 -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.553 18:28:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:51.553 18:28:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:51.553 18:28:48 -- target/dif.sh@73 -- # cat 00:19:51.553 18:28:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:51.553 18:28:48 -- target/dif.sh@72 -- # (( file++ )) 00:19:51.553 18:28:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:51.553 { 00:19:51.553 "params": { 00:19:51.553 "name": "Nvme$subsystem", 00:19:51.553 "trtype": "$TEST_TRANSPORT", 00:19:51.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.553 "adrfam": "ipv4", 00:19:51.553 "trsvcid": "$NVMF_PORT", 00:19:51.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.553 "hdgst": ${hdgst:-false}, 00:19:51.553 "ddgst": ${ddgst:-false} 00:19:51.553 }, 00:19:51.553 "method": "bdev_nvme_attach_controller" 00:19:51.553 } 00:19:51.553 EOF 00:19:51.553 )") 00:19:51.553 18:28:48 -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.553 18:28:48 -- nvmf/common.sh@542 -- # cat 00:19:51.553 18:28:48 -- nvmf/common.sh@544 -- # jq . 
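The interleaved xtrace above is gen_nvmf_target_json at work: for each subsystem id it appends one bdev_nvme_attach_controller entry to a config array, joins the entries with IFS=, and validates the result with jq. A minimal stand-alone bash sketch of the same idea follows; the outer "subsystems"/"bdev" wrapper is an assumption, since the trace only shows the per-subsystem params blocks themselves.

# Sketch of the JSON generator reconstructed from the xtrace above.
# The "subsystems"/"bdev" wrapper is assumed; the params blocks match the log.
gen_json() {
    local id entries=()
    for id in "$@"; do
        entries+=("{
          \"method\": \"bdev_nvme_attach_controller\",
          \"params\": {
            \"name\": \"Nvme$id\", \"trtype\": \"tcp\", \"traddr\": \"10.0.0.2\",
            \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\",
            \"subnqn\": \"nqn.2016-06.io.spdk:cnode$id\",
            \"hostnqn\": \"nqn.2016-06.io.spdk:host$id\",
            \"hdgst\": false, \"ddgst\": false
          }
        }")
    done
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${entries[*]}" | jq .
}

gen_json 0 1    # one attach entry per cnode, matching the config printed just below in the trace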
00:19:51.553 18:28:48 -- nvmf/common.sh@545 -- # IFS=, 00:19:51.553 18:28:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:51.553 "params": { 00:19:51.553 "name": "Nvme0", 00:19:51.553 "trtype": "tcp", 00:19:51.553 "traddr": "10.0.0.2", 00:19:51.553 "adrfam": "ipv4", 00:19:51.553 "trsvcid": "4420", 00:19:51.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:51.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:51.553 "hdgst": false, 00:19:51.553 "ddgst": false 00:19:51.553 }, 00:19:51.553 "method": "bdev_nvme_attach_controller" 00:19:51.553 },{ 00:19:51.553 "params": { 00:19:51.553 "name": "Nvme1", 00:19:51.553 "trtype": "tcp", 00:19:51.553 "traddr": "10.0.0.2", 00:19:51.553 "adrfam": "ipv4", 00:19:51.553 "trsvcid": "4420", 00:19:51.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.553 "hdgst": false, 00:19:51.553 "ddgst": false 00:19:51.553 }, 00:19:51.553 "method": "bdev_nvme_attach_controller" 00:19:51.553 }' 00:19:51.553 18:28:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:51.553 18:28:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:51.553 18:28:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.553 18:28:48 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:51.553 18:28:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.553 18:28:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:51.553 18:28:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:51.553 18:28:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:51.553 18:28:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:51.553 18:28:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.553 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:51.553 ... 00:19:51.553 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:51.553 ... 00:19:51.553 fio-3.35 00:19:51.553 Starting 4 threads 00:19:51.553 [2024-11-17 18:28:48.578537] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
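fio_bdev then hands that JSON to stock fio with SPDK's external bdev ioengine injected through LD_PRELOAD; the bdev config arrives on /dev/fd/62 and the generated job file on /dev/fd/61. Run by hand, the equivalent is roughly the sketch below. The plugin and fio paths are taken from the trace; the job file body only mirrors the parameters visible in this run (randread, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5) and is otherwise illustrative, since the file produced by gen_fio_conf is not reproduced in the log.

# Stand-alone equivalent of the fio_plugin invocation above (paths per this workspace).
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

cat > /tmp/dif.fio << 'FIO'
[global]
thread=1
direct=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
numjobs=2
iodepth=8
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
FIO

gen_json 0 1 > /tmp/bdev.json    # generator sketched earlier in this section
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --spdk_json_conf=/tmp/bdev.json /tmp/dif.fio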
00:19:51.553 [2024-11-17 18:28:48.578841] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:55.742 00:19:55.742 filename0: (groupid=0, jobs=1): err= 0: pid=86626: Sun Nov 17 18:28:53 2024 00:19:55.742 read: IOPS=2414, BW=18.9MiB/s (19.8MB/s)(94.4MiB/5001msec) 00:19:55.742 slat (nsec): min=6752, max=55563, avg=12211.00, stdev=4830.31 00:19:55.742 clat (usec): min=1238, max=6570, avg=3284.16, stdev=975.59 00:19:55.742 lat (usec): min=1246, max=6587, avg=3296.37, stdev=975.70 00:19:55.742 clat percentiles (usec): 00:19:55.742 | 1.00th=[ 1926], 5.00th=[ 2057], 10.00th=[ 2114], 20.00th=[ 2212], 00:19:55.742 | 30.00th=[ 2376], 40.00th=[ 2606], 50.00th=[ 3097], 60.00th=[ 3949], 00:19:55.742 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:19:55.742 | 99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[ 5342], 00:19:55.742 | 99.99th=[ 5604] 00:19:55.742 bw ( KiB/s): min=19184, max=20176, per=28.91%, avg=19545.89, stdev=295.44, samples=9 00:19:55.742 iops : min= 2398, max= 2522, avg=2443.22, stdev=36.95, samples=9 00:19:55.742 lat (msec) : 2=2.24%, 4=59.08%, 10=38.69% 00:19:55.742 cpu : usr=90.44%, sys=8.50%, ctx=7, majf=0, minf=9 00:19:55.742 IO depths : 1=0.1%, 2=0.3%, 4=63.5%, 8=36.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.742 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.742 issued rwts: total=12077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.742 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:55.742 filename0: (groupid=0, jobs=1): err= 0: pid=86627: Sun Nov 17 18:28:53 2024 00:19:55.742 read: IOPS=1810, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5002msec) 00:19:55.742 slat (nsec): min=7216, max=61476, avg=14510.22, stdev=4310.06 00:19:55.742 clat (usec): min=1036, max=6735, avg=4363.89, stdev=371.83 00:19:55.742 lat (usec): min=1046, max=6748, avg=4378.40, stdev=371.74 00:19:55.742 clat percentiles (usec): 00:19:55.742 | 1.00th=[ 3064], 5.00th=[ 3654], 10.00th=[ 4113], 20.00th=[ 4293], 00:19:55.742 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4424], 00:19:55.742 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4752], 00:19:55.742 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5342], 99.95th=[ 5407], 00:19:55.743 | 99.99th=[ 6718] 00:19:55.743 bw ( KiB/s): min=13952, max=14608, per=21.10%, avg=14264.89, stdev=229.80, samples=9 00:19:55.743 iops : min= 1744, max= 1826, avg=1783.11, stdev=28.72, samples=9 00:19:55.743 lat (msec) : 2=0.56%, 4=8.77%, 10=90.67% 00:19:55.743 cpu : usr=91.74%, sys=7.50%, ctx=10, majf=0, minf=9 00:19:55.743 IO depths : 1=0.1%, 2=22.6%, 4=51.3%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.743 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.743 issued rwts: total=9055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.743 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:55.743 filename1: (groupid=0, jobs=1): err= 0: pid=86628: Sun Nov 17 18:28:53 2024 00:19:55.743 read: IOPS=2415, BW=18.9MiB/s (19.8MB/s)(94.4MiB/5003msec) 00:19:55.743 slat (nsec): min=6811, max=62739, avg=13238.53, stdev=5440.95 00:19:55.743 clat (usec): min=1198, max=6626, avg=3278.35, stdev=975.11 00:19:55.743 lat (usec): min=1207, max=6640, avg=3291.58, stdev=973.67 00:19:55.743 clat percentiles (usec): 00:19:55.743 | 1.00th=[ 1909], 5.00th=[ 
2040], 10.00th=[ 2114], 20.00th=[ 2212], 00:19:55.743 | 30.00th=[ 2376], 40.00th=[ 2606], 50.00th=[ 3064], 60.00th=[ 3949], 00:19:55.743 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:19:55.743 | 99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[ 5342], 00:19:55.743 | 99.99th=[ 5538] 00:19:55.743 bw ( KiB/s): min=19200, max=20064, per=28.93%, avg=19560.89, stdev=260.60, samples=9 00:19:55.743 iops : min= 2400, max= 2508, avg=2445.11, stdev=32.57, samples=9 00:19:55.743 lat (msec) : 2=2.61%, 4=58.76%, 10=38.64% 00:19:55.743 cpu : usr=91.68%, sys=7.30%, ctx=9, majf=0, minf=0 00:19:55.743 IO depths : 1=0.1%, 2=0.3%, 4=63.5%, 8=36.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.743 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.743 issued rwts: total=12087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.743 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:55.743 filename1: (groupid=0, jobs=1): err= 0: pid=86629: Sun Nov 17 18:28:53 2024 00:19:55.743 read: IOPS=1811, BW=14.2MiB/s (14.8MB/s)(70.8MiB/5001msec) 00:19:55.743 slat (usec): min=6, max=182, avg=14.96, stdev= 5.39 00:19:55.743 clat (usec): min=978, max=6736, avg=4358.69, stdev=384.17 00:19:55.743 lat (usec): min=985, max=6750, avg=4373.65, stdev=384.24 00:19:55.743 clat percentiles (usec): 00:19:55.743 | 1.00th=[ 2999], 5.00th=[ 3654], 10.00th=[ 4113], 20.00th=[ 4228], 00:19:55.743 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4424], 00:19:55.743 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4752], 00:19:55.743 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5342], 99.95th=[ 5407], 00:19:55.743 | 99.99th=[ 6718] 00:19:55.743 bw ( KiB/s): min=13952, max=14608, per=21.10%, avg=14264.89, stdev=229.80, samples=9 00:19:55.743 iops : min= 1744, max= 1826, avg=1783.11, stdev=28.72, samples=9 00:19:55.743 lat (usec) : 1000=0.03% 00:19:55.743 lat (msec) : 2=0.62%, 4=8.83%, 10=90.52% 00:19:55.743 cpu : usr=91.22%, sys=7.70%, ctx=71, majf=0, minf=0 00:19:55.743 IO depths : 1=0.1%, 2=22.6%, 4=51.3%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.743 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.743 issued rwts: total=9061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.743 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:55.743 00:19:55.743 Run status group 0 (all jobs): 00:19:55.743 READ: bw=66.0MiB/s (69.2MB/s), 14.1MiB/s-18.9MiB/s (14.8MB/s-19.8MB/s), io=330MiB (346MB), run=5001-5003msec 00:19:55.743 18:28:53 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:55.743 18:28:53 -- target/dif.sh@43 -- # local sub 00:19:55.743 18:28:53 -- target/dif.sh@45 -- # for sub in "$@" 00:19:55.743 18:28:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:55.743 18:28:53 -- target/dif.sh@36 -- # local sub_id=0 00:19:55.743 18:28:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:55.743 18:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.743 18:28:53 -- common/autotest_common.sh@10 -- # set +x 00:19:55.743 18:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.743 18:28:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:55.743 18:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.743 18:28:53 -- common/autotest_common.sh@10 -- # 
set +x 00:19:55.743 18:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.743 18:28:53 -- target/dif.sh@45 -- # for sub in "$@" 00:19:55.743 18:28:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:55.743 18:28:53 -- target/dif.sh@36 -- # local sub_id=1 00:19:55.743 18:28:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.743 18:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.743 18:28:53 -- common/autotest_common.sh@10 -- # set +x 00:19:55.743 18:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.743 18:28:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:55.743 18:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.743 18:28:53 -- common/autotest_common.sh@10 -- # set +x 00:19:55.743 ************************************ 00:19:55.743 END TEST fio_dif_rand_params 00:19:55.743 ************************************ 00:19:55.743 18:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.743 00:19:55.743 real 0m22.981s 00:19:55.743 user 2m3.763s 00:19:55.743 sys 0m8.318s 00:19:55.743 18:28:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:55.743 18:28:53 -- common/autotest_common.sh@10 -- # set +x 00:19:55.743 18:28:53 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:55.743 18:28:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:55.743 18:28:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:55.743 18:28:53 -- common/autotest_common.sh@10 -- # set +x 00:19:55.743 ************************************ 00:19:55.743 START TEST fio_dif_digest 00:19:55.743 ************************************ 00:19:55.743 18:28:53 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:19:55.743 18:28:53 -- target/dif.sh@123 -- # local NULL_DIF 00:19:55.743 18:28:53 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:55.743 18:28:53 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:55.743 18:28:53 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:55.743 18:28:53 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:55.743 18:28:53 -- target/dif.sh@127 -- # numjobs=3 00:19:55.743 18:28:53 -- target/dif.sh@127 -- # iodepth=3 00:19:55.743 18:28:53 -- target/dif.sh@127 -- # runtime=10 00:19:55.743 18:28:53 -- target/dif.sh@128 -- # hdgst=true 00:19:55.743 18:28:53 -- target/dif.sh@128 -- # ddgst=true 00:19:55.743 18:28:53 -- target/dif.sh@130 -- # create_subsystems 0 00:19:55.743 18:28:53 -- target/dif.sh@28 -- # local sub 00:19:55.743 18:28:53 -- target/dif.sh@30 -- # for sub in "$@" 00:19:55.743 18:28:53 -- target/dif.sh@31 -- # create_subsystem 0 00:19:55.743 18:28:53 -- target/dif.sh@18 -- # local sub_id=0 00:19:55.743 18:28:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:55.743 18:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.743 18:28:53 -- common/autotest_common.sh@10 -- # set +x 00:19:55.743 bdev_null0 00:19:55.743 18:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.743 18:28:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:55.743 18:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.743 18:28:53 -- common/autotest_common.sh@10 -- # set +x 00:19:55.743 18:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.743 18:28:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:55.743 18:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.743 18:28:53 -- common/autotest_common.sh@10 -- # set +x 00:19:55.743 18:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.743 18:28:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:55.743 18:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.743 18:28:53 -- common/autotest_common.sh@10 -- # set +x 00:19:55.743 [2024-11-17 18:28:53.993725] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.743 18:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.743 18:28:53 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:55.743 18:28:53 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:55.743 18:28:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:55.743 18:28:53 -- nvmf/common.sh@520 -- # config=() 00:19:55.743 18:28:53 -- nvmf/common.sh@520 -- # local subsystem config 00:19:55.743 18:28:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:55.743 18:28:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:55.743 18:28:53 -- target/dif.sh@82 -- # gen_fio_conf 00:19:55.743 18:28:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:55.743 { 00:19:55.743 "params": { 00:19:55.743 "name": "Nvme$subsystem", 00:19:55.743 "trtype": "$TEST_TRANSPORT", 00:19:55.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.743 "adrfam": "ipv4", 00:19:55.743 "trsvcid": "$NVMF_PORT", 00:19:55.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.743 "hdgst": ${hdgst:-false}, 00:19:55.743 "ddgst": ${ddgst:-false} 00:19:55.743 }, 00:19:55.743 "method": "bdev_nvme_attach_controller" 00:19:55.743 } 00:19:55.743 EOF 00:19:55.743 )") 00:19:55.743 18:28:53 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:55.743 18:28:53 -- target/dif.sh@54 -- # local file 00:19:55.743 18:28:53 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:55.743 18:28:53 -- target/dif.sh@56 -- # cat 00:19:55.743 18:28:53 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:55.743 18:28:54 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:55.743 18:28:54 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.743 18:28:54 -- common/autotest_common.sh@1330 -- # shift 00:19:55.744 18:28:54 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:55.744 18:28:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.744 18:28:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:55.744 18:28:54 -- nvmf/common.sh@542 -- # cat 00:19:55.744 18:28:54 -- target/dif.sh@72 -- # (( file <= files )) 00:19:55.744 18:28:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.744 18:28:54 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:55.744 18:28:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:56.001 18:28:54 -- nvmf/common.sh@544 -- # jq . 
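Editor's note: the fio_dif_digest setup traced above boils down to four RPCs. A minimal sketch for reproducing it by hand with scripts/rpc.py against an already-running nvmf_tgt; it assumes the TCP transport was created earlier (nvmf_create_transport -t tcp), as the harness does, and uses the paths and addresses visible in this log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # 64 MB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 3
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

    # Expose it as namespace 1 of a TCP subsystem listening on 10.0.0.2:4420
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420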
00:19:56.001 18:28:54 -- nvmf/common.sh@545 -- # IFS=, 00:19:56.001 18:28:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:56.001 "params": { 00:19:56.001 "name": "Nvme0", 00:19:56.001 "trtype": "tcp", 00:19:56.001 "traddr": "10.0.0.2", 00:19:56.001 "adrfam": "ipv4", 00:19:56.001 "trsvcid": "4420", 00:19:56.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:56.001 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:56.001 "hdgst": true, 00:19:56.001 "ddgst": true 00:19:56.001 }, 00:19:56.001 "method": "bdev_nvme_attach_controller" 00:19:56.001 }' 00:19:56.001 18:28:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:56.001 18:28:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:56.001 18:28:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:56.001 18:28:54 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:56.001 18:28:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.001 18:28:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:56.001 18:28:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:56.001 18:28:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:56.001 18:28:54 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:56.001 18:28:54 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:56.001 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:56.001 ... 00:19:56.001 fio-3.35 00:19:56.001 Starting 3 threads 00:19:56.259 [2024-11-17 18:28:54.511438] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
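Editor's note: the harness streams its fio job file and JSON bdev config through /dev/fd/61 and /dev/fd/62, so the exact file gen_fio_conf writes is not shown by xtrace. A rough standalone equivalent, inferred from the parameters visible above (randread, 128 KiB blocks, iodepth 3, 3 jobs, 10 s runtime); the digest.fio/bdev.json names and the Nvme0n1 bdev name are illustrative:

    # digest.fio -- approximate job file (the spdk_bdev engine requires thread=1)
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=10
    time_based=1

    [filename0]
    filename=Nvme0n1

With the bdev_nvme_attach_controller block printed above saved as bdev.json ("hdgst": true and "ddgst": true turn on TCP header and data digests), the invocation mirrors the log:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio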
00:19:56.259 [2024-11-17 18:28:54.511520] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:08.464 00:20:08.464 filename0: (groupid=0, jobs=1): err= 0: pid=86735: Sun Nov 17 18:29:04 2024 00:20:08.464 read: IOPS=234, BW=29.4MiB/s (30.8MB/s)(294MiB/10009msec) 00:20:08.464 slat (nsec): min=6961, max=60391, avg=15331.46, stdev=5890.67 00:20:08.464 clat (usec): min=11582, max=15207, avg=12733.01, stdev=587.05 00:20:08.464 lat (usec): min=11597, max=15233, avg=12748.35, stdev=587.50 00:20:08.464 clat percentiles (usec): 00:20:08.464 | 1.00th=[11731], 5.00th=[11863], 10.00th=[11994], 20.00th=[12256], 00:20:08.464 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:20:08.464 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829], 00:20:08.464 | 99.00th=[14091], 99.50th=[14222], 99.90th=[15139], 99.95th=[15139], 00:20:08.464 | 99.99th=[15270] 00:20:08.464 bw ( KiB/s): min=29184, max=31488, per=33.33%, avg=30073.26, stdev=640.67, samples=19 00:20:08.464 iops : min= 228, max= 246, avg=234.95, stdev= 5.01, samples=19 00:20:08.464 lat (msec) : 20=100.00% 00:20:08.464 cpu : usr=91.52%, sys=7.93%, ctx=8, majf=0, minf=9 00:20:08.464 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.464 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.464 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:08.464 filename0: (groupid=0, jobs=1): err= 0: pid=86736: Sun Nov 17 18:29:04 2024 00:20:08.464 read: IOPS=235, BW=29.4MiB/s (30.8MB/s)(294MiB/10007msec) 00:20:08.464 slat (nsec): min=7078, max=65153, avg=16617.89, stdev=6067.38 00:20:08.464 clat (usec): min=11559, max=14248, avg=12725.68, stdev=579.45 00:20:08.464 lat (usec): min=11573, max=14264, avg=12742.30, stdev=579.97 00:20:08.464 clat percentiles (usec): 00:20:08.464 | 1.00th=[11731], 5.00th=[11863], 10.00th=[11994], 20.00th=[12256], 00:20:08.464 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:20:08.464 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829], 00:20:08.464 | 99.00th=[14091], 99.50th=[14091], 99.90th=[14222], 99.95th=[14222], 00:20:08.464 | 99.99th=[14222] 00:20:08.464 bw ( KiB/s): min=29184, max=30720, per=33.37%, avg=30113.68, stdev=604.67, samples=19 00:20:08.465 iops : min= 228, max= 240, avg=235.26, stdev= 4.72, samples=19 00:20:08.465 lat (msec) : 20=100.00% 00:20:08.465 cpu : usr=90.80%, sys=8.57%, ctx=76, majf=0, minf=11 00:20:08.465 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.465 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:08.465 filename0: (groupid=0, jobs=1): err= 0: pid=86737: Sun Nov 17 18:29:04 2024 00:20:08.465 read: IOPS=235, BW=29.4MiB/s (30.8MB/s)(294MiB/10007msec) 00:20:08.465 slat (nsec): min=7299, max=62229, avg=16235.86, stdev=5649.10 00:20:08.465 clat (usec): min=11560, max=14260, avg=12728.51, stdev=580.44 00:20:08.465 lat (usec): min=11573, max=14276, avg=12744.75, stdev=580.98 00:20:08.465 clat percentiles (usec): 00:20:08.465 | 1.00th=[11731], 5.00th=[11863], 
10.00th=[11994], 20.00th=[12256], 00:20:08.465 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:20:08.465 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829], 00:20:08.465 | 99.00th=[14091], 99.50th=[14091], 99.90th=[14222], 99.95th=[14222], 00:20:08.465 | 99.99th=[14222] 00:20:08.465 bw ( KiB/s): min=29184, max=30720, per=33.37%, avg=30113.68, stdev=604.67, samples=19 00:20:08.465 iops : min= 228, max= 240, avg=235.26, stdev= 4.72, samples=19 00:20:08.465 lat (msec) : 20=100.00% 00:20:08.465 cpu : usr=91.47%, sys=8.03%, ctx=9, majf=0, minf=9 00:20:08.465 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.465 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.465 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:08.465 00:20:08.465 Run status group 0 (all jobs): 00:20:08.465 READ: bw=88.1MiB/s (92.4MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=882MiB (925MB), run=10007-10009msec 00:20:08.465 18:29:04 -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:08.465 18:29:04 -- target/dif.sh@43 -- # local sub 00:20:08.465 18:29:04 -- target/dif.sh@45 -- # for sub in "$@" 00:20:08.465 18:29:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:08.465 18:29:04 -- target/dif.sh@36 -- # local sub_id=0 00:20:08.465 18:29:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:08.465 18:29:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.465 18:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:08.465 18:29:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.465 18:29:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:08.465 18:29:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.465 18:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:08.465 18:29:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.465 ************************************ 00:20:08.465 END TEST fio_dif_digest 00:20:08.465 ************************************ 00:20:08.465 00:20:08.465 real 0m10.867s 00:20:08.465 user 0m27.919s 00:20:08.465 sys 0m2.677s 00:20:08.465 18:29:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:08.465 18:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:08.465 18:29:04 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:08.465 18:29:04 -- target/dif.sh@147 -- # nvmftestfini 00:20:08.465 18:29:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:08.465 18:29:04 -- nvmf/common.sh@116 -- # sync 00:20:08.465 18:29:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:08.465 18:29:04 -- nvmf/common.sh@119 -- # set +e 00:20:08.465 18:29:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:08.465 18:29:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:08.465 rmmod nvme_tcp 00:20:08.465 rmmod nvme_fabrics 00:20:08.465 rmmod nvme_keyring 00:20:08.465 18:29:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:08.465 18:29:04 -- nvmf/common.sh@123 -- # set -e 00:20:08.465 18:29:04 -- nvmf/common.sh@124 -- # return 0 00:20:08.465 18:29:04 -- nvmf/common.sh@477 -- # '[' -n 85983 ']' 00:20:08.465 18:29:04 -- nvmf/common.sh@478 -- # killprocess 85983 00:20:08.465 18:29:04 -- common/autotest_common.sh@936 -- # '[' -z 85983 ']' 00:20:08.465 18:29:04 -- common/autotest_common.sh@940 -- # kill 
-0 85983 00:20:08.465 18:29:04 -- common/autotest_common.sh@941 -- # uname 00:20:08.465 18:29:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.465 18:29:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85983 00:20:08.465 killing process with pid 85983 00:20:08.465 18:29:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:08.465 18:29:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:08.465 18:29:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85983' 00:20:08.465 18:29:05 -- common/autotest_common.sh@955 -- # kill 85983 00:20:08.465 18:29:05 -- common/autotest_common.sh@960 -- # wait 85983 00:20:08.465 18:29:05 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:08.465 18:29:05 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:08.465 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:08.465 Waiting for block devices as requested 00:20:08.465 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:08.465 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:08.465 18:29:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:08.465 18:29:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:08.465 18:29:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.465 18:29:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:08.465 18:29:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.465 18:29:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:08.465 18:29:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.465 18:29:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:08.465 00:20:08.465 real 0m58.881s 00:20:08.465 user 3m46.312s 00:20:08.465 sys 0m19.497s 00:20:08.465 18:29:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:08.465 18:29:05 -- common/autotest_common.sh@10 -- # set +x 00:20:08.465 ************************************ 00:20:08.465 END TEST nvmf_dif 00:20:08.465 ************************************ 00:20:08.465 18:29:05 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:08.465 18:29:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:08.465 18:29:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:08.465 18:29:05 -- common/autotest_common.sh@10 -- # set +x 00:20:08.465 ************************************ 00:20:08.465 START TEST nvmf_abort_qd_sizes 00:20:08.465 ************************************ 00:20:08.465 18:29:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:08.465 * Looking for test storage... 
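Editor's note: the nvmf_dif teardown interleaved above reduces to the following standalone commands. The PID and interface names are the ones from this run; wait only applies when nvmf_tgt is a child of the current shell, and the namespace deletion is an assumption about what _remove_spdk_ns does here:

    sync
    modprobe -v -r nvme-tcp        # verbose removal also drops nvme_fabrics/nvme_keyring, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    kill 85983 && wait 85983       # stop the nvmf_tgt reactor process
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset    # rebind NVMe devices to kernel drivers
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # drop the test namespace if it is still around
    ip -4 addr flush nvmf_init_if                          # clear the initiator-side veth address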
00:20:08.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:08.465 18:29:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:08.465 18:29:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:08.465 18:29:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:08.465 18:29:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:08.465 18:29:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:08.465 18:29:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:08.465 18:29:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:08.465 18:29:05 -- scripts/common.sh@335 -- # IFS=.-: 00:20:08.465 18:29:05 -- scripts/common.sh@335 -- # read -ra ver1 00:20:08.465 18:29:05 -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.465 18:29:05 -- scripts/common.sh@336 -- # read -ra ver2 00:20:08.465 18:29:05 -- scripts/common.sh@337 -- # local 'op=<' 00:20:08.465 18:29:05 -- scripts/common.sh@339 -- # ver1_l=2 00:20:08.465 18:29:05 -- scripts/common.sh@340 -- # ver2_l=1 00:20:08.465 18:29:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:08.465 18:29:05 -- scripts/common.sh@343 -- # case "$op" in 00:20:08.465 18:29:05 -- scripts/common.sh@344 -- # : 1 00:20:08.465 18:29:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:08.465 18:29:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:08.465 18:29:05 -- scripts/common.sh@364 -- # decimal 1 00:20:08.465 18:29:05 -- scripts/common.sh@352 -- # local d=1 00:20:08.465 18:29:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.465 18:29:05 -- scripts/common.sh@354 -- # echo 1 00:20:08.465 18:29:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:08.465 18:29:05 -- scripts/common.sh@365 -- # decimal 2 00:20:08.465 18:29:05 -- scripts/common.sh@352 -- # local d=2 00:20:08.465 18:29:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.465 18:29:05 -- scripts/common.sh@354 -- # echo 2 00:20:08.465 18:29:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:08.465 18:29:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:08.465 18:29:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:08.465 18:29:05 -- scripts/common.sh@367 -- # return 0 00:20:08.465 18:29:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.465 18:29:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.465 --rc genhtml_branch_coverage=1 00:20:08.465 --rc genhtml_function_coverage=1 00:20:08.465 --rc genhtml_legend=1 00:20:08.465 --rc geninfo_all_blocks=1 00:20:08.465 --rc geninfo_unexecuted_blocks=1 00:20:08.465 00:20:08.465 ' 00:20:08.465 18:29:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.465 --rc genhtml_branch_coverage=1 00:20:08.465 --rc genhtml_function_coverage=1 00:20:08.465 --rc genhtml_legend=1 00:20:08.465 --rc geninfo_all_blocks=1 00:20:08.465 --rc geninfo_unexecuted_blocks=1 00:20:08.465 00:20:08.465 ' 00:20:08.465 18:29:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.465 --rc genhtml_branch_coverage=1 00:20:08.465 --rc genhtml_function_coverage=1 00:20:08.465 --rc genhtml_legend=1 00:20:08.465 --rc geninfo_all_blocks=1 00:20:08.465 --rc geninfo_unexecuted_blocks=1 00:20:08.465 00:20:08.465 ' 00:20:08.465 
18:29:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.465 --rc genhtml_branch_coverage=1 00:20:08.465 --rc genhtml_function_coverage=1 00:20:08.465 --rc genhtml_legend=1 00:20:08.465 --rc geninfo_all_blocks=1 00:20:08.465 --rc geninfo_unexecuted_blocks=1 00:20:08.465 00:20:08.465 ' 00:20:08.465 18:29:05 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:08.465 18:29:05 -- nvmf/common.sh@7 -- # uname -s 00:20:08.465 18:29:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.465 18:29:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.465 18:29:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.465 18:29:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.465 18:29:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.465 18:29:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.465 18:29:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.465 18:29:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.465 18:29:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.465 18:29:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.465 18:29:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 00:20:08.465 18:29:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=f1ec9f72-7473-4a4e-a03d-121531763870 00:20:08.465 18:29:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.465 18:29:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.465 18:29:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:08.465 18:29:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:08.465 18:29:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.465 18:29:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.465 18:29:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.465 18:29:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.465 18:29:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.465 18:29:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.465 18:29:05 -- paths/export.sh@5 -- # export PATH 00:20:08.465 18:29:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.465 18:29:05 -- nvmf/common.sh@46 -- # : 0 00:20:08.465 18:29:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:08.465 18:29:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:08.465 18:29:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:08.465 18:29:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.465 18:29:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.465 18:29:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:08.465 18:29:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:08.465 18:29:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:08.465 18:29:05 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:20:08.465 18:29:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:08.465 18:29:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.465 18:29:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:08.465 18:29:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:08.465 18:29:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:08.465 18:29:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.465 18:29:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:08.465 18:29:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.465 18:29:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:08.465 18:29:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:08.465 18:29:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:08.465 18:29:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:08.465 18:29:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:08.465 18:29:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:08.465 18:29:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.465 18:29:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.465 18:29:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:08.465 18:29:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:08.465 18:29:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:08.466 18:29:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:08.466 18:29:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:08.466 18:29:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.466 18:29:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:08.466 18:29:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:08.466 18:29:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:08.466 18:29:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:08.466 18:29:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:08.466 18:29:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:08.466 Cannot find device "nvmf_tgt_br" 00:20:08.466 18:29:06 -- nvmf/common.sh@154 -- # true 00:20:08.466 18:29:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:08.466 Cannot find device "nvmf_tgt_br2" 00:20:08.466 18:29:06 -- nvmf/common.sh@155 -- # true 
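Editor's note: the next stretch of log removes any leftover interfaces and then builds the veth test topology from scratch. A consolidated sketch of those commands (run as root), using the same names and addresses as nvmf_veth_init:

    # Target namespace plus three veth pairs: one for the initiator, two for the target
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addresses: 10.0.0.1 on the host side, 10.0.0.2/10.0.0.3 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers together and open TCP/4420 toward the initiator
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity pings, as in the log: host -> namespace, and namespace -> host
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1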
00:20:08.466 18:29:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:08.466 18:29:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:08.466 Cannot find device "nvmf_tgt_br" 00:20:08.466 18:29:06 -- nvmf/common.sh@157 -- # true 00:20:08.466 18:29:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:08.466 Cannot find device "nvmf_tgt_br2" 00:20:08.466 18:29:06 -- nvmf/common.sh@158 -- # true 00:20:08.466 18:29:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:08.466 18:29:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:08.466 18:29:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:08.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.466 18:29:06 -- nvmf/common.sh@161 -- # true 00:20:08.466 18:29:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:08.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.466 18:29:06 -- nvmf/common.sh@162 -- # true 00:20:08.466 18:29:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:08.466 18:29:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:08.466 18:29:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:08.466 18:29:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:08.466 18:29:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:08.466 18:29:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:08.466 18:29:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:08.466 18:29:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:08.466 18:29:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:08.466 18:29:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:08.466 18:29:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:08.466 18:29:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:08.466 18:29:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:08.466 18:29:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:08.466 18:29:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:08.466 18:29:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:08.466 18:29:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:08.466 18:29:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:08.466 18:29:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:08.466 18:29:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:08.466 18:29:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:08.466 18:29:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:08.466 18:29:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:08.466 18:29:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:08.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:08.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:20:08.466 00:20:08.466 --- 10.0.0.2 ping statistics --- 00:20:08.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.466 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:08.466 18:29:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:08.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:08.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:20:08.466 00:20:08.466 --- 10.0.0.3 ping statistics --- 00:20:08.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.466 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:08.466 18:29:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:08.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:20:08.466 00:20:08.466 --- 10.0.0.1 ping statistics --- 00:20:08.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.466 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:20:08.466 18:29:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.466 18:29:06 -- nvmf/common.sh@421 -- # return 0 00:20:08.466 18:29:06 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:20:08.466 18:29:06 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:08.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:08.984 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:20:08.984 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:20:08.984 18:29:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.984 18:29:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:08.984 18:29:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:08.984 18:29:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.984 18:29:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:08.984 18:29:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:08.984 18:29:07 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:20:08.984 18:29:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:08.984 18:29:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.984 18:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:08.984 18:29:07 -- nvmf/common.sh@469 -- # nvmfpid=87345 00:20:08.984 18:29:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:08.984 18:29:07 -- nvmf/common.sh@470 -- # waitforlisten 87345 00:20:08.984 18:29:07 -- common/autotest_common.sh@829 -- # '[' -z 87345 ']' 00:20:08.984 18:29:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.984 18:29:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.984 18:29:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.984 18:29:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.984 18:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:08.984 [2024-11-17 18:29:07.240914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:08.984 [2024-11-17 18:29:07.241007] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.242 [2024-11-17 18:29:07.376141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.242 [2024-11-17 18:29:07.419290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:09.242 [2024-11-17 18:29:07.419469] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.242 [2024-11-17 18:29:07.419486] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.242 [2024-11-17 18:29:07.419497] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.242 [2024-11-17 18:29:07.420468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.242 [2024-11-17 18:29:07.420560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.242 [2024-11-17 18:29:07.420695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.242 [2024-11-17 18:29:07.420700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.242 18:29:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.242 18:29:07 -- common/autotest_common.sh@862 -- # return 0 00:20:09.243 18:29:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:09.243 18:29:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:09.243 18:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:09.502 18:29:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.502 18:29:07 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:09.502 18:29:07 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:20:09.503 18:29:07 -- scripts/common.sh@311 -- # local bdf bdfs 00:20:09.503 18:29:07 -- scripts/common.sh@312 -- # local nvmes 00:20:09.503 18:29:07 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:20:09.503 18:29:07 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:09.503 18:29:07 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:20:09.503 18:29:07 -- scripts/common.sh@297 -- # local bdf= 00:20:09.503 18:29:07 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:20:09.503 18:29:07 -- scripts/common.sh@232 -- # local class 00:20:09.503 18:29:07 -- scripts/common.sh@233 -- # local subclass 00:20:09.503 18:29:07 -- scripts/common.sh@234 -- # local progif 00:20:09.503 18:29:07 -- scripts/common.sh@235 -- # printf %02x 1 00:20:09.503 18:29:07 -- scripts/common.sh@235 -- # class=01 00:20:09.503 18:29:07 -- scripts/common.sh@236 -- # printf %02x 8 00:20:09.503 18:29:07 -- scripts/common.sh@236 -- # subclass=08 00:20:09.503 18:29:07 -- scripts/common.sh@237 -- # printf %02x 2 00:20:09.503 18:29:07 -- scripts/common.sh@237 -- # progif=02 00:20:09.503 18:29:07 -- scripts/common.sh@239 -- # hash lspci 00:20:09.503 18:29:07 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:20:09.503 18:29:07 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:20:09.503 18:29:07 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 
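Editor's note: the nvme_in_userspace helper traced above discovers NVMe controllers by PCI class code rather than by /dev names. Its pipeline, reproduced standalone (class 01 = mass storage, subclass 08 = NVM, prog-if 02 = NVMe):

    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'
    # On this VM it yields the two emulated controllers the test drives:
    #   0000:00:06.0
    #   0000:00:07.0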
00:20:09.503 18:29:07 -- scripts/common.sh@242 -- # grep -i -- -p02 00:20:09.503 18:29:07 -- scripts/common.sh@244 -- # tr -d '"' 00:20:09.503 18:29:07 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:09.503 18:29:07 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:20:09.503 18:29:07 -- scripts/common.sh@15 -- # local i 00:20:09.503 18:29:07 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:20:09.503 18:29:07 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:09.503 18:29:07 -- scripts/common.sh@24 -- # return 0 00:20:09.503 18:29:07 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:20:09.503 18:29:07 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:09.503 18:29:07 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:20:09.503 18:29:07 -- scripts/common.sh@15 -- # local i 00:20:09.503 18:29:07 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:20:09.503 18:29:07 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:09.503 18:29:07 -- scripts/common.sh@24 -- # return 0 00:20:09.503 18:29:07 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:20:09.503 18:29:07 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:09.503 18:29:07 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:20:09.503 18:29:07 -- scripts/common.sh@322 -- # uname -s 00:20:09.503 18:29:07 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:09.503 18:29:07 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:09.503 18:29:07 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:09.503 18:29:07 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:20:09.503 18:29:07 -- scripts/common.sh@322 -- # uname -s 00:20:09.503 18:29:07 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:09.503 18:29:07 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:09.503 18:29:07 -- scripts/common.sh@327 -- # (( 2 )) 00:20:09.503 18:29:07 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:20:09.503 18:29:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:09.503 18:29:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:09.503 18:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:09.503 ************************************ 00:20:09.503 START TEST spdk_target_abort 00:20:09.503 ************************************ 00:20:09.503 18:29:07 -- common/autotest_common.sh@1114 -- # spdk_target 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:20:09.503 18:29:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.503 18:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:09.503 spdk_targetn1 00:20:09.503 18:29:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:09.503 18:29:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.503 18:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:09.503 [2024-11-17 
18:29:07.667470] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.503 18:29:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:20:09.503 18:29:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.503 18:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:09.503 18:29:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:20:09.503 18:29:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.503 18:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:09.503 18:29:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:20:09.503 18:29:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.503 18:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:09.503 [2024-11-17 18:29:07.699771] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.503 18:29:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:09.503 18:29:07 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:12.788 Initializing NVMe Controllers 00:20:12.788 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:12.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:12.788 Initialization complete. Launching workers. 00:20:12.788 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10272, failed: 0 00:20:12.788 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1108, failed to submit 9164 00:20:12.788 success 905, unsuccess 203, failed 0 00:20:12.788 18:29:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:12.788 18:29:10 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:16.075 Initializing NVMe Controllers 00:20:16.075 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:16.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:16.075 Initialization complete. Launching workers. 00:20:16.075 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8958, failed: 0 00:20:16.075 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1236, failed to submit 7722 00:20:16.075 success 427, unsuccess 809, failed 0 00:20:16.075 18:29:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:16.075 18:29:14 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:19.364 Initializing NVMe Controllers 00:20:19.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:19.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:19.364 Initialization complete. Launching workers. 
00:20:19.364 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32329, failed: 0 00:20:19.364 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2504, failed to submit 29825 00:20:19.364 success 411, unsuccess 2093, failed 0 00:20:19.364 18:29:17 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:20:19.364 18:29:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.364 18:29:17 -- common/autotest_common.sh@10 -- # set +x 00:20:19.364 18:29:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.364 18:29:17 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:19.364 18:29:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.365 18:29:17 -- common/autotest_common.sh@10 -- # set +x 00:20:19.624 18:29:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.624 18:29:17 -- target/abort_qd_sizes.sh@62 -- # killprocess 87345 00:20:19.624 18:29:17 -- common/autotest_common.sh@936 -- # '[' -z 87345 ']' 00:20:19.624 18:29:17 -- common/autotest_common.sh@940 -- # kill -0 87345 00:20:19.624 18:29:17 -- common/autotest_common.sh@941 -- # uname 00:20:19.624 18:29:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:19.624 18:29:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87345 00:20:19.624 18:29:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:19.624 18:29:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:19.624 killing process with pid 87345 00:20:19.624 18:29:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87345' 00:20:19.624 18:29:17 -- common/autotest_common.sh@955 -- # kill 87345 00:20:19.624 18:29:17 -- common/autotest_common.sh@960 -- # wait 87345 00:20:19.883 00:20:19.883 real 0m10.396s 00:20:19.883 user 0m39.671s 00:20:19.883 sys 0m1.984s 00:20:19.883 18:29:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:19.883 18:29:17 -- common/autotest_common.sh@10 -- # set +x 00:20:19.883 ************************************ 00:20:19.883 END TEST spdk_target_abort 00:20:19.883 ************************************ 00:20:19.883 18:29:18 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:20:19.883 18:29:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:19.883 18:29:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:19.883 18:29:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.883 ************************************ 00:20:19.883 START TEST kernel_target_abort 00:20:19.884 ************************************ 00:20:19.884 18:29:18 -- common/autotest_common.sh@1114 -- # kernel_target 00:20:19.884 18:29:18 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:20:19.884 18:29:18 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:20:19.884 18:29:18 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:20:19.884 18:29:18 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:20:19.884 18:29:18 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:20:19.884 18:29:18 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:19.884 18:29:18 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:19.884 18:29:18 -- nvmf/common.sh@627 -- # local block nvme 00:20:19.884 18:29:18 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:20:19.884 18:29:18 -- nvmf/common.sh@630 -- # modprobe nvmet 00:20:19.884 18:29:18 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:19.884 18:29:18 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:20.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:20.402 Waiting for block devices as requested 00:20:20.402 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:20.402 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:20.402 18:29:18 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:20.402 18:29:18 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:20.402 18:29:18 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:20:20.402 18:29:18 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:20:20.402 18:29:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:20.402 No valid GPT data, bailing 00:20:20.402 18:29:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:20.402 18:29:18 -- scripts/common.sh@393 -- # pt= 00:20:20.402 18:29:18 -- scripts/common.sh@394 -- # return 1 00:20:20.402 18:29:18 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:20:20.402 18:29:18 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:20.402 18:29:18 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:20.402 18:29:18 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:20:20.402 18:29:18 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:20:20.662 18:29:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:20.662 No valid GPT data, bailing 00:20:20.662 18:29:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:20.662 18:29:18 -- scripts/common.sh@393 -- # pt= 00:20:20.662 18:29:18 -- scripts/common.sh@394 -- # return 1 00:20:20.662 18:29:18 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:20:20.662 18:29:18 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:20.662 18:29:18 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:20:20.662 18:29:18 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:20:20.662 18:29:18 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:20:20.662 18:29:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:20:20.662 No valid GPT data, bailing 00:20:20.662 18:29:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:20:20.662 18:29:18 -- scripts/common.sh@393 -- # pt= 00:20:20.662 18:29:18 -- scripts/common.sh@394 -- # return 1 00:20:20.662 18:29:18 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:20:20.662 18:29:18 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:20.662 18:29:18 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:20:20.662 18:29:18 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:20:20.662 18:29:18 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:20:20.662 18:29:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:20:20.662 No valid GPT data, bailing 00:20:20.662 18:29:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:20:20.662 18:29:18 -- scripts/common.sh@393 -- # pt= 00:20:20.662 18:29:18 -- scripts/common.sh@394 -- # return 1 00:20:20.662 18:29:18 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:20:20.662 18:29:18 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:20:20.662 18:29:18 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:20.662 18:29:18 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:20.662 18:29:18 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:20.662 18:29:18 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:20:20.662 18:29:18 -- nvmf/common.sh@654 -- # echo 1 00:20:20.662 18:29:18 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:20:20.662 18:29:18 -- nvmf/common.sh@656 -- # echo 1 00:20:20.662 18:29:18 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:20:20.662 18:29:18 -- nvmf/common.sh@663 -- # echo tcp 00:20:20.662 18:29:18 -- nvmf/common.sh@664 -- # echo 4420 00:20:20.662 18:29:18 -- nvmf/common.sh@665 -- # echo ipv4 00:20:20.662 18:29:18 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:20.922 18:29:18 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f1ec9f72-7473-4a4e-a03d-121531763870 --hostid=f1ec9f72-7473-4a4e-a03d-121531763870 -a 10.0.0.1 -t tcp -s 4420 00:20:20.922 00:20:20.922 Discovery Log Number of Records 2, Generation counter 2 00:20:20.922 =====Discovery Log Entry 0====== 00:20:20.922 trtype: tcp 00:20:20.922 adrfam: ipv4 00:20:20.922 subtype: current discovery subsystem 00:20:20.922 treq: not specified, sq flow control disable supported 00:20:20.922 portid: 1 00:20:20.922 trsvcid: 4420 00:20:20.922 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:20.922 traddr: 10.0.0.1 00:20:20.922 eflags: none 00:20:20.922 sectype: none 00:20:20.922 =====Discovery Log Entry 1====== 00:20:20.922 trtype: tcp 00:20:20.922 adrfam: ipv4 00:20:20.922 subtype: nvme subsystem 00:20:20.922 treq: not specified, sq flow control disable supported 00:20:20.922 portid: 1 00:20:20.922 trsvcid: 4420 00:20:20.922 subnqn: kernel_target 00:20:20.922 traddr: 10.0.0.1 00:20:20.922 eflags: none 00:20:20.922 sectype: none 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
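Editor's note: the configfs writes above configure the kernel nvmet target, but xtrace does not show their redirection targets. A hedged sketch of the usual sequence for exporting /dev/nvme1n3 as subsystem kernel_target on 10.0.0.1:4420; the attribute receiving "SPDK-kernel_target" is assumed to be attr_model and may differ by kernel or script version:

    modprobe nvmet
    modprobe nvmet-tcp

    sub=/sys/kernel/config/nvmet/subsystems/kernel_target
    ns=$sub/namespaces/1
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$sub"
    mkdir "$ns"
    mkdir "$port"

    echo SPDK-kernel_target > "$sub/attr_model"   # assumed target of the bare 'echo SPDK-kernel_target' above
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme1n3 > "$ns/device_path"
    echo 1 > "$ns/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/kernel_target"

    # Both the discovery subsystem and kernel_target should now show up:
    nvme discover -t tcp -a 10.0.0.1 -s 4420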
00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:20.922 18:29:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:24.210 Initializing NVMe Controllers 00:20:24.210 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:24.210 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:24.210 Initialization complete. Launching workers. 00:20:24.210 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 28880, failed: 0 00:20:24.210 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 28880, failed to submit 0 00:20:24.210 success 0, unsuccess 28880, failed 0 00:20:24.210 18:29:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:24.210 18:29:22 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:27.497 Initializing NVMe Controllers 00:20:27.497 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:27.497 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:27.497 Initialization complete. Launching workers. 00:20:27.497 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 66198, failed: 0 00:20:27.497 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27869, failed to submit 38329 00:20:27.497 success 0, unsuccess 27869, failed 0 00:20:27.497 18:29:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:27.497 18:29:25 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:30.784 Initializing NVMe Controllers 00:20:30.784 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:30.784 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:30.784 Initialization complete. Launching workers. 
00:20:30.784 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 75786, failed: 0 00:20:30.784 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18906, failed to submit 56880 00:20:30.784 success 0, unsuccess 18906, failed 0 00:20:30.784 18:29:28 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:20:30.784 18:29:28 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:20:30.784 18:29:28 -- nvmf/common.sh@677 -- # echo 0 00:20:30.784 18:29:28 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:20:30.784 18:29:28 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:30.784 18:29:28 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:30.784 18:29:28 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:30.784 18:29:28 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:20:30.784 18:29:28 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:20:30.784 00:20:30.784 real 0m10.493s 00:20:30.784 user 0m5.413s 00:20:30.784 sys 0m2.491s 00:20:30.784 18:29:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:30.784 18:29:28 -- common/autotest_common.sh@10 -- # set +x 00:20:30.784 ************************************ 00:20:30.784 END TEST kernel_target_abort 00:20:30.784 ************************************ 00:20:30.784 18:29:28 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:20:30.784 18:29:28 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:20:30.784 18:29:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:30.784 18:29:28 -- nvmf/common.sh@116 -- # sync 00:20:30.784 18:29:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:30.784 18:29:28 -- nvmf/common.sh@119 -- # set +e 00:20:30.784 18:29:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:30.784 18:29:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:30.784 rmmod nvme_tcp 00:20:30.784 rmmod nvme_fabrics 00:20:30.784 rmmod nvme_keyring 00:20:30.784 18:29:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:30.784 18:29:28 -- nvmf/common.sh@123 -- # set -e 00:20:30.784 18:29:28 -- nvmf/common.sh@124 -- # return 0 00:20:30.784 18:29:28 -- nvmf/common.sh@477 -- # '[' -n 87345 ']' 00:20:30.785 18:29:28 -- nvmf/common.sh@478 -- # killprocess 87345 00:20:30.785 18:29:28 -- common/autotest_common.sh@936 -- # '[' -z 87345 ']' 00:20:30.785 18:29:28 -- common/autotest_common.sh@940 -- # kill -0 87345 00:20:30.785 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87345) - No such process 00:20:30.785 18:29:28 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87345 is not found' 00:20:30.785 Process with pid 87345 is not found 00:20:30.785 18:29:28 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:30.785 18:29:28 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:31.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:31.358 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:31.358 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:31.358 18:29:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:31.358 18:29:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:31.358 18:29:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.358 18:29:29 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:20:31.358 18:29:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.358 18:29:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:31.358 18:29:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.358 18:29:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:31.358 00:20:31.358 real 0m23.750s 00:20:31.358 user 0m46.394s 00:20:31.358 sys 0m5.732s 00:20:31.358 18:29:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:31.358 ************************************ 00:20:31.358 END TEST nvmf_abort_qd_sizes 00:20:31.358 ************************************ 00:20:31.358 18:29:29 -- common/autotest_common.sh@10 -- # set +x 00:20:31.358 18:29:29 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:20:31.358 18:29:29 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:20:31.358 18:29:29 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:20:31.358 18:29:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:31.358 18:29:29 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:31.358 18:29:29 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:20:31.358 18:29:29 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:31.358 18:29:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:31.358 18:29:29 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:20:31.358 18:29:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:31.358 18:29:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:31.358 18:29:29 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:20:31.358 18:29:29 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:20:31.358 18:29:29 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:20:31.358 18:29:29 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:20:31.358 18:29:29 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:20:31.358 18:29:29 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:20:31.358 18:29:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:31.358 18:29:29 -- common/autotest_common.sh@10 -- # set +x 00:20:31.358 18:29:29 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:20:31.358 18:29:29 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:20:31.358 18:29:29 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:20:31.358 18:29:29 -- common/autotest_common.sh@10 -- # set +x 00:20:33.264 INFO: APP EXITING 00:20:33.264 INFO: killing all VMs 00:20:33.264 INFO: killing vhost app 00:20:33.264 INFO: EXIT DONE 00:20:33.832 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:33.832 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:33.832 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:34.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:34.399 Cleaning 00:20:34.399 Removing: /var/run/dpdk/spdk0/config 00:20:34.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:34.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:34.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:34.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:34.399 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:34.399 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:34.658 Removing: /var/run/dpdk/spdk1/config 00:20:34.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:34.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:34.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:20:34.658 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:34.658 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:34.658 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:34.658 Removing: /var/run/dpdk/spdk2/config 00:20:34.658 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:34.658 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:34.658 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:34.658 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:34.658 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:34.658 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:34.658 Removing: /var/run/dpdk/spdk3/config 00:20:34.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:34.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:34.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:34.658 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:34.658 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:34.658 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:34.658 Removing: /var/run/dpdk/spdk4/config 00:20:34.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:34.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:34.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:34.658 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:34.658 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:34.658 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:34.658 Removing: /dev/shm/nvmf_trace.0 00:20:34.658 Removing: /dev/shm/spdk_tgt_trace.pid65564 00:20:34.658 Removing: /var/run/dpdk/spdk0 00:20:34.658 Removing: /var/run/dpdk/spdk1 00:20:34.658 Removing: /var/run/dpdk/spdk2 00:20:34.658 Removing: /var/run/dpdk/spdk3 00:20:34.658 Removing: /var/run/dpdk/spdk4 00:20:34.658 Removing: /var/run/dpdk/spdk_pid65418 00:20:34.658 Removing: /var/run/dpdk/spdk_pid65564 00:20:34.658 Removing: /var/run/dpdk/spdk_pid65817 00:20:34.658 Removing: /var/run/dpdk/spdk_pid66008 00:20:34.658 Removing: /var/run/dpdk/spdk_pid66161 00:20:34.658 Removing: /var/run/dpdk/spdk_pid66238 00:20:34.658 Removing: /var/run/dpdk/spdk_pid66310 00:20:34.658 Removing: /var/run/dpdk/spdk_pid66408 00:20:34.658 Removing: /var/run/dpdk/spdk_pid66492 00:20:34.658 Removing: /var/run/dpdk/spdk_pid66525 00:20:34.658 Removing: /var/run/dpdk/spdk_pid66555 00:20:34.658 Removing: /var/run/dpdk/spdk_pid66629 00:20:34.658 Removing: /var/run/dpdk/spdk_pid66705 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67137 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67183 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67234 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67249 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67312 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67328 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67384 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67400 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67445 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67463 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67503 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67527 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67651 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67681 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67768 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67814 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67833 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67897 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67911 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67942 00:20:34.658 Removing: /var/run/dpdk/spdk_pid67967 
00:20:34.658 Removing: /var/run/dpdk/spdk_pid67996 00:20:34.658 Removing: /var/run/dpdk/spdk_pid68010 00:20:34.658 Removing: /var/run/dpdk/spdk_pid68045 00:20:34.658 Removing: /var/run/dpdk/spdk_pid68064 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68093 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68113 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68147 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68161 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68190 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68210 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68244 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68260 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68295 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68309 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68343 00:20:34.659 Removing: /var/run/dpdk/spdk_pid68366 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68395 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68409 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68449 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68463 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68492 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68510 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68546 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68560 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68589 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68614 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68643 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68657 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68692 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68714 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68746 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68764 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68806 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68820 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68854 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68869 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68904 00:20:34.918 Removing: /var/run/dpdk/spdk_pid68976 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69063 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69395 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69408 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69443 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69456 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69468 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69486 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69500 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69508 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69526 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69544 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69552 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69570 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69582 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69596 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69614 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69621 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69640 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69658 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69665 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69684 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69708 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69726 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69748 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69818 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69839 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69854 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69877 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69892 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69894 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69935 00:20:34.918 Removing: 
/var/run/dpdk/spdk_pid69946 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69967 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69980 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69982 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69990 00:20:34.918 Removing: /var/run/dpdk/spdk_pid69997 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70005 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70012 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70014 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70046 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70067 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70077 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70105 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70115 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70122 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70163 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70173 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70195 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70203 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70210 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70218 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70225 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70227 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70235 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70242 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70318 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70360 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70466 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70497 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70544 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70559 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70579 00:20:34.918 Removing: /var/run/dpdk/spdk_pid70588 00:20:35.178 Removing: /var/run/dpdk/spdk_pid70623 00:20:35.178 Removing: /var/run/dpdk/spdk_pid70632 00:20:35.178 Removing: /var/run/dpdk/spdk_pid70708 00:20:35.178 Removing: /var/run/dpdk/spdk_pid70722 00:20:35.178 Removing: /var/run/dpdk/spdk_pid70765 00:20:35.178 Removing: /var/run/dpdk/spdk_pid70839 00:20:35.178 Removing: /var/run/dpdk/spdk_pid70884 00:20:35.178 Removing: /var/run/dpdk/spdk_pid70907 00:20:35.178 Removing: /var/run/dpdk/spdk_pid71005 00:20:35.178 Removing: /var/run/dpdk/spdk_pid71046 00:20:35.178 Removing: /var/run/dpdk/spdk_pid71077 00:20:35.178 Removing: /var/run/dpdk/spdk_pid71301 00:20:35.178 Removing: /var/run/dpdk/spdk_pid71387 00:20:35.178 Removing: /var/run/dpdk/spdk_pid71415 00:20:35.178 Removing: /var/run/dpdk/spdk_pid71739 00:20:35.178 Removing: /var/run/dpdk/spdk_pid71777 00:20:35.178 Removing: /var/run/dpdk/spdk_pid72088 00:20:35.178 Removing: /var/run/dpdk/spdk_pid72506 00:20:35.178 Removing: /var/run/dpdk/spdk_pid72773 00:20:35.178 Removing: /var/run/dpdk/spdk_pid73515 00:20:35.178 Removing: /var/run/dpdk/spdk_pid74329 00:20:35.178 Removing: /var/run/dpdk/spdk_pid74446 00:20:35.178 Removing: /var/run/dpdk/spdk_pid74508 00:20:35.178 Removing: /var/run/dpdk/spdk_pid75771 00:20:35.178 Removing: /var/run/dpdk/spdk_pid75994 00:20:35.178 Removing: /var/run/dpdk/spdk_pid76303 00:20:35.178 Removing: /var/run/dpdk/spdk_pid76416 00:20:35.178 Removing: /var/run/dpdk/spdk_pid76545 00:20:35.178 Removing: /var/run/dpdk/spdk_pid76573 00:20:35.178 Removing: /var/run/dpdk/spdk_pid76601 00:20:35.178 Removing: /var/run/dpdk/spdk_pid76620 00:20:35.178 Removing: /var/run/dpdk/spdk_pid76707 00:20:35.178 Removing: /var/run/dpdk/spdk_pid76842 00:20:35.178 Removing: /var/run/dpdk/spdk_pid76969 00:20:35.178 Removing: /var/run/dpdk/spdk_pid77039 00:20:35.178 Removing: /var/run/dpdk/spdk_pid77430 00:20:35.178 Removing: /var/run/dpdk/spdk_pid77783 
00:20:35.178 Removing: /var/run/dpdk/spdk_pid77786 00:20:35.178 Removing: /var/run/dpdk/spdk_pid79996 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80002 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80287 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80301 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80321 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80346 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80356 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80441 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80447 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80556 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80558 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80671 00:20:35.178 Removing: /var/run/dpdk/spdk_pid80674 00:20:35.178 Removing: /var/run/dpdk/spdk_pid81080 00:20:35.178 Removing: /var/run/dpdk/spdk_pid81133 00:20:35.178 Removing: /var/run/dpdk/spdk_pid81238 00:20:35.178 Removing: /var/run/dpdk/spdk_pid81317 00:20:35.178 Removing: /var/run/dpdk/spdk_pid81620 00:20:35.178 Removing: /var/run/dpdk/spdk_pid81822 00:20:35.178 Removing: /var/run/dpdk/spdk_pid82218 00:20:35.178 Removing: /var/run/dpdk/spdk_pid82739 00:20:35.178 Removing: /var/run/dpdk/spdk_pid83194 00:20:35.178 Removing: /var/run/dpdk/spdk_pid83241 00:20:35.178 Removing: /var/run/dpdk/spdk_pid83295 00:20:35.178 Removing: /var/run/dpdk/spdk_pid83343 00:20:35.178 Removing: /var/run/dpdk/spdk_pid83439 00:20:35.178 Removing: /var/run/dpdk/spdk_pid83492 00:20:35.178 Removing: /var/run/dpdk/spdk_pid83552 00:20:35.178 Removing: /var/run/dpdk/spdk_pid83614 00:20:35.178 Removing: /var/run/dpdk/spdk_pid83931 00:20:35.178 Removing: /var/run/dpdk/spdk_pid85084 00:20:35.178 Removing: /var/run/dpdk/spdk_pid85230 00:20:35.178 Removing: /var/run/dpdk/spdk_pid85478 00:20:35.178 Removing: /var/run/dpdk/spdk_pid86040 00:20:35.178 Removing: /var/run/dpdk/spdk_pid86194 00:20:35.178 Removing: /var/run/dpdk/spdk_pid86355 00:20:35.178 Removing: /var/run/dpdk/spdk_pid86452 00:20:35.178 Removing: /var/run/dpdk/spdk_pid86617 00:20:35.178 Removing: /var/run/dpdk/spdk_pid86730 00:20:35.178 Removing: /var/run/dpdk/spdk_pid87383 00:20:35.178 Removing: /var/run/dpdk/spdk_pid87418 00:20:35.178 Removing: /var/run/dpdk/spdk_pid87459 00:20:35.178 Removing: /var/run/dpdk/spdk_pid87701 00:20:35.178 Removing: /var/run/dpdk/spdk_pid87732 00:20:35.178 Removing: /var/run/dpdk/spdk_pid87767 00:20:35.178 Clean 00:20:35.437 killing process with pid 59795 00:20:35.437 killing process with pid 59798 00:20:35.437 18:29:33 -- common/autotest_common.sh@1446 -- # return 0 00:20:35.437 18:29:33 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:20:35.437 18:29:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:35.437 18:29:33 -- common/autotest_common.sh@10 -- # set +x 00:20:35.437 18:29:33 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:20:35.437 18:29:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:35.437 18:29:33 -- common/autotest_common.sh@10 -- # set +x 00:20:35.437 18:29:33 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:35.437 18:29:33 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:35.437 18:29:33 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:35.437 18:29:33 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:20:35.437 18:29:33 -- spdk/autotest.sh@383 -- # hostname 00:20:35.437 18:29:33 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:35.696 geninfo: WARNING: invalid characters removed from testname! 00:20:57.637 18:29:55 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:01.830 18:29:59 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:03.736 18:30:01 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:07.026 18:30:04 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:08.966 18:30:07 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:11.500 18:30:09 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:14.034 18:30:11 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:14.034 18:30:12 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:21:14.034 18:30:12 -- common/autotest_common.sh@1690 -- $ lcov --version 00:21:14.034 18:30:12 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:21:14.034 18:30:12 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:21:14.034 18:30:12 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:21:14.034 18:30:12 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:21:14.034 18:30:12 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:21:14.034 18:30:12 -- scripts/common.sh@335 -- $ IFS=.-: 00:21:14.034 18:30:12 -- scripts/common.sh@335 -- $ read -ra ver1 00:21:14.034 18:30:12 -- scripts/common.sh@336 -- $ IFS=.-: 
00:21:14.034 18:30:12 -- scripts/common.sh@336 -- $ read -ra ver2 00:21:14.034 18:30:12 -- scripts/common.sh@337 -- $ local 'op=<' 00:21:14.034 18:30:12 -- scripts/common.sh@339 -- $ ver1_l=2 00:21:14.034 18:30:12 -- scripts/common.sh@340 -- $ ver2_l=1 00:21:14.034 18:30:12 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:21:14.034 18:30:12 -- scripts/common.sh@343 -- $ case "$op" in 00:21:14.034 18:30:12 -- scripts/common.sh@344 -- $ : 1 00:21:14.034 18:30:12 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:21:14.034 18:30:12 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:14.034 18:30:12 -- scripts/common.sh@364 -- $ decimal 1 00:21:14.034 18:30:12 -- scripts/common.sh@352 -- $ local d=1 00:21:14.034 18:30:12 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:21:14.034 18:30:12 -- scripts/common.sh@354 -- $ echo 1 00:21:14.034 18:30:12 -- scripts/common.sh@364 -- $ ver1[v]=1 00:21:14.034 18:30:12 -- scripts/common.sh@365 -- $ decimal 2 00:21:14.034 18:30:12 -- scripts/common.sh@352 -- $ local d=2 00:21:14.034 18:30:12 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:21:14.034 18:30:12 -- scripts/common.sh@354 -- $ echo 2 00:21:14.034 18:30:12 -- scripts/common.sh@365 -- $ ver2[v]=2 00:21:14.034 18:30:12 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:21:14.034 18:30:12 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:21:14.034 18:30:12 -- scripts/common.sh@367 -- $ return 0 00:21:14.034 18:30:12 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.034 18:30:12 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:21:14.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.034 --rc genhtml_branch_coverage=1 00:21:14.034 --rc genhtml_function_coverage=1 00:21:14.034 --rc genhtml_legend=1 00:21:14.034 --rc geninfo_all_blocks=1 00:21:14.034 --rc geninfo_unexecuted_blocks=1 00:21:14.034 00:21:14.034 ' 00:21:14.034 18:30:12 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:21:14.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.034 --rc genhtml_branch_coverage=1 00:21:14.034 --rc genhtml_function_coverage=1 00:21:14.034 --rc genhtml_legend=1 00:21:14.034 --rc geninfo_all_blocks=1 00:21:14.034 --rc geninfo_unexecuted_blocks=1 00:21:14.034 00:21:14.034 ' 00:21:14.034 18:30:12 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:21:14.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.034 --rc genhtml_branch_coverage=1 00:21:14.034 --rc genhtml_function_coverage=1 00:21:14.034 --rc genhtml_legend=1 00:21:14.034 --rc geninfo_all_blocks=1 00:21:14.034 --rc geninfo_unexecuted_blocks=1 00:21:14.034 00:21:14.034 ' 00:21:14.034 18:30:12 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:21:14.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.034 --rc genhtml_branch_coverage=1 00:21:14.034 --rc genhtml_function_coverage=1 00:21:14.034 --rc genhtml_legend=1 00:21:14.034 --rc geninfo_all_blocks=1 00:21:14.034 --rc geninfo_unexecuted_blocks=1 00:21:14.034 00:21:14.034 ' 00:21:14.034 18:30:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:14.034 18:30:12 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:14.034 18:30:12 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.034 18:30:12 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.034 18:30:12 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.034 18:30:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.034 18:30:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.034 18:30:12 -- paths/export.sh@5 -- $ export PATH 00:21:14.035 18:30:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.035 18:30:12 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:14.035 18:30:12 -- common/autobuild_common.sh@440 -- $ date +%s 00:21:14.035 18:30:12 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731868212.XXXXXX 00:21:14.035 18:30:12 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731868212.kdIUvA 00:21:14.035 18:30:12 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:21:14.035 18:30:12 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:21:14.035 18:30:12 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:21:14.035 18:30:12 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:21:14.035 18:30:12 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:14.035 18:30:12 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:14.035 18:30:12 -- common/autobuild_common.sh@456 -- $ get_config_params 00:21:14.035 18:30:12 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:21:14.035 18:30:12 -- common/autotest_common.sh@10 -- $ set +x 00:21:14.035 18:30:12 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:21:14.035 18:30:12 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:21:14.035 18:30:12 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 
00:21:14.035 18:30:12 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:21:14.035 18:30:12 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:21:14.035 18:30:12 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:21:14.035 18:30:12 -- spdk/autopackage.sh@19 -- $ timing_finish 00:21:14.035 18:30:12 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:14.035 18:30:12 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:21:14.035 18:30:12 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:14.035 18:30:12 -- spdk/autopackage.sh@20 -- $ exit 0 00:21:14.035 + [[ -n 5967 ]] 00:21:14.035 + sudo kill 5967 00:21:14.303 [Pipeline] } 00:21:14.320 [Pipeline] // timeout 00:21:14.326 [Pipeline] } 00:21:14.341 [Pipeline] // stage 00:21:14.347 [Pipeline] } 00:21:14.361 [Pipeline] // catchError 00:21:14.370 [Pipeline] stage 00:21:14.372 [Pipeline] { (Stop VM) 00:21:14.384 [Pipeline] sh 00:21:14.666 + vagrant halt 00:21:18.857 ==> default: Halting domain... 00:21:24.165 [Pipeline] sh 00:21:24.444 + vagrant destroy -f 00:21:27.731 ==> default: Removing domain... 00:21:27.744 [Pipeline] sh 00:21:28.025 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:21:28.035 [Pipeline] } 00:21:28.051 [Pipeline] // stage 00:21:28.058 [Pipeline] } 00:21:28.072 [Pipeline] // dir 00:21:28.077 [Pipeline] } 00:21:28.098 [Pipeline] // wrap 00:21:28.107 [Pipeline] } 00:21:28.122 [Pipeline] // catchError 00:21:28.132 [Pipeline] stage 00:21:28.135 [Pipeline] { (Epilogue) 00:21:28.151 [Pipeline] sh 00:21:28.436 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:33.718 [Pipeline] catchError 00:21:33.720 [Pipeline] { 00:21:33.733 [Pipeline] sh 00:21:34.015 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:34.274 Artifacts sizes are good 00:21:34.283 [Pipeline] } 00:21:34.298 [Pipeline] // catchError 00:21:34.310 [Pipeline] archiveArtifacts 00:21:34.319 Archiving artifacts 00:21:34.456 [Pipeline] cleanWs 00:21:34.471 [WS-CLEANUP] Deleting project workspace... 00:21:34.471 [WS-CLEANUP] Deferred wipeout is used... 00:21:34.500 [WS-CLEANUP] done 00:21:34.502 [Pipeline] } 00:21:34.517 [Pipeline] // stage 00:21:34.522 [Pipeline] } 00:21:34.534 [Pipeline] // node 00:21:34.540 [Pipeline] End of Pipeline 00:21:34.583 Finished: SUCCESS